id: string
year: int64
authors: string
title: string
language: string
num_authors: int64
num_women_authors: int64
woman_as_first_author: int64
affiliations: string
num_affilitations: int64
at_least_one_international_affiliation: int64
international_affilitation_only: int64
international_authors: int64
names_international_authors: string
num_compay_authors: int64
names_company_authors: string
countries_affilitations: string
cities_affiliations: string
abstract: string
introduction: string
conclusion: string
num_topics: int64
topics: string
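The field/type pairs above describe the dataset schema. A minimal sketch of loading and sanity-checking such a table with pandas; the file name `clicit_2024_papers.csv` is hypothetical, since the schema does not name the export:

```python
import pandas as pd

# Hypothetical filename; the actual export name is not given in the schema above.
df = pd.read_csv("clicit_2024_papers.csv")

# The dtypes should mirror the string/int64 pairs listed in the schema.
print(df.dtypes)
assert df["year"].dtype == "int64"          # e.g. 2024
assert df["num_authors"].dtype == "int64"

# Example query: papers with a woman as first author.
print(df[df["woman_as_first_author"] == 1][["id", "title"]].head())
```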
601_2024
2024
Francesca Grasso, Ronny Patz, Manfred Stede
NYTAC-CC: A Climate Change Subcorpus based on New York Times Articles
ENG
3
1
1
Università di Torino, University of Potsdam
2
1
0
2
Ronny Patz, Manfred Stede
0
0
Italy, Germany
Turin, Potsdam
Over the past decade, the analysis of discourses on climate change (CC) has gained increased interest within the social sciences and the NLP community. Textual resources are crucial for understanding how narratives about this phenomenon are crafted and delivered. However, while there is growing attention on social media resources, there is still a scarcity of datasets that cover CC in news media in a representative way. This paper presents a CC-specific subcorpus of 3,630 articles extracted from the 1.8-million-article New York Times Annotated Corpus, marking the first CC analysis on this data. The subcorpus was created by combining different methods for text selection to ensure representativeness and reliability, which is validated using ClimateBERT. To provide initial insights into the CC subcorpus, we discuss the results of a topic modeling experiment (LDA). The results show the diversity of contexts in which CC is discussed in news media over time.
We present NYTAC-CC, a topic-specific subcorpus with 3,630 articles addressing climate change (CC), derived from the New York Times Annotated Corpus. This subcorpus covers a 20-year period, drawing from NYTAC's collection of 1.8 million articles published between 1987 and 2007, which is available through the Linguistic Data Consortium. The original corpus, and thus also the subcorpus, includes a variety of metadata such as the `desk' (the newspaper branch) and both manually- and automatically-labeled content categories, with many articles also featuring hand-written summaries. The extensive use of NYTAC in NLP research over the last 15 years (e.g., AUTHOR) benefits CC researchers, allowing for detailed historical analysis of CC discussions in news media. This includes exploring how CC debates were interwoven with topics like domestic and foreign policy, science reporting, and arts and culture coverage. Unlike other CC-focused resources that often contain shorter documents, the NYTAC-CC subcorpus offers a diverse array of articles with varying lengths and complex content, making it a unique resource for investigating the evolution of CC narratives over time. The contribution of this paper is threefold: (i) We present the NYTAC-CC subcorpus and its construction, which blends dictionary-based and supervised methods in order to ensure representativeness as well as validity and reliability, which are key in social science research AUTHOR. This hybrid approach addresses the challenges of refining a topic-specific subcorpus from a larger corpus, aiming to mitigate the limitations of traditional keyword-based sampling that often results in false positives. (ii) To demonstrate the validity of the subcorpus, and thus its reliability for further downstream tasks, we illustrate the results of a classification experiment using ClimateBERT AUTHOR. While this experiment further validates that the articles in our NYTAC-CC subcorpus are, indeed, true positives, it also shows limitations of ClimateBERT: as it falsely classifies a number of true positives from our subcorpus as negatives, we demonstrate that our approach achieves better recall of relevant CC articles from the NYTAC corpus. (iii) To gain initial insights into the coverage of the CC subcorpus, we use keyword analysis and topic modeling (specifically LDA) to track specifics of CC reporting over the 1987-2007 time span.
The results show important trends over time, including key periods of reporting and a large variety of contexts in which CC is discussed. Thus, our goal is to provide a substantively new and relevant subcorpus, developed and validated in multiple iterations, and then to give a first overview of the NYT's coverage of climate change during the time period covered in our corpus. Although several studies have explored U.S. print media's reporting on anthropogenic CC, we cover an important 20-year period in which much of today's climate change discourse evolved. The rest of the paper is structured as follows: Section sct:relwork summarizes relevant related work. Section sct:corpus describes the process of creating the topic-specific subcorpus from the NYTAC through the aforementioned combined method, as well as the results of a classification task using ClimateBERT to validate the NYTAC-CC subcorpus. Section sct:corpstat offers key observations on the content and distribution of NYTAC-CC articles, including the outcome of an LDA experiment for unsupervised sub-topic exploration within the subcorpus. We conclude in Section sct:conc with suggestions for future work based on the subcorpus.
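A minimal sketch of the two-stage selection described above (keyword-based prefiltering followed by a ClimateBERT check). The keyword list, the threshold, and the Hugging Face model ID `climatebert/distilroberta-base-climate-detector` are assumptions for illustration, not the paper's exact configuration:

```python
from transformers import pipeline

# Stage 1: cheap keyword prefilter (illustrative terms, not the paper's dictionary).
KEYWORDS = ["climate change", "global warming", "greenhouse"]

def prefilter(articles):
    return [a for a in articles if any(k in a["text"].lower() for k in KEYWORDS)]

# Stage 2: validate candidates with a ClimateBERT-style topic detector.
# Model ID is an assumption; any climate-topic classifier with yes/no labels works similarly.
clf = pipeline("text-classification",
               model="climatebert/distilroberta-base-climate-detector")

def select_cc_articles(articles, threshold=0.5):
    selected = []
    for art in prefilter(articles):
        # Truncate: long news articles exceed the encoder's input length.
        pred = clf(art["text"][:2000], truncation=True)[0]
        if pred["label"] == "yes" and pred["score"] >= threshold:
            selected.append(art)
    return selected
```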
In this paper, we introduced the NYTAC-CC, a specialized subcorpus of 3,630 climate change articles from the New York Times Annotated Corpus spanning 1987 to 2007, marking the first CC analysis with this dataset. Addressing the lack of available news-based textual resources for NLP tasks, we employed a hybrid method combining keyword-based prefiltering and automatic classification to optimize the corpus construction. The representativeness of the subcorpus was confirmed using ClimateBERT, but manual inspection of ClimateBERT's classification of a considerable number of true positives as negatives also showed the model's limitations and the benefits of the hybrid approach chosen. Initial analyses of the subcorpus, including statistics, keyword searches, and topic modeling, highlight the corpus's potential for detailed diachronic and subtopic exploration. Thus, the NYTAC-CC subcorpus can be a useful resource for examining the historical narrative of climate change in news media. As it builds on the NYTAC corpus, it adds to previous work on this data, providing valuable insights for social science research. It also serves as a beneficial dataset for developing NLP applications that require a deep understanding of climate-related discourse. While the size of the subcorpus may restrict certain quantitative analyses, its rich, concentrated content is well suited to qualitative studies. Furthermore, it offers the potential for expansion and integration with additional sources to enhance its utility and relevance for ongoing climate change research. Future work will expand on these findings with advanced topic modeling techniques and integrate more recent articles to enrich the diachronic analysis. Newspaper text documents and corpora can be complex in terms of their "inner logic" and metadata assignment.
We worked with the NYTAC, which contains certain types of documents that bundle a set of potentially very different news items (making content analysis more difficult), as well as noise in the manually-assigned topic terms. Building a reliable topic-specific subcorpus is therefore not trivial, and we experimented with different methods before settling on our climate change subcorpus. Our procedure contrasts with the typical approach of previous research, which either uses just a small number of query terms (often only the CC bigram) or involves manual confirmation. The former has problems for both precision and recall, and the latter is feasible only when the overall corpus is reasonably small (as in the case of AUTHOR). We see our method of keyword-based pre-filtering and subsequent supervised classification as a compromise between quality and efficiency requirements, suitable for relatively large document collections. We showed basic statistics on the patterns of climate change coverage in the NYT over a 20-year period, and then extended this by running topic modeling (LDA) on the texts reduced to just their nominal terms. The resulting set of 18 topics seems to us to reflect the subtopic "landscape" of climate change quite well, and the topic-document assignment can now be used for more specific content analyses. The IDs of the texts that we identified as the NYTAC-CC subcorpus will be made available for research by other licensees of the corpus. For our own future work, we plan further experiments with topic modeling, including more recent algorithms than LDA, such as structured topic modeling (where time can be modeled systematically) and dynamic topic modeling, as well as classification experiments on the whole corpus and both qualitative and quantitative diachronic analyses of the NYTAC-CC, in terms of language use and of more fine-grained subtopic trends over time.
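The conclusion above describes running LDA over texts reduced to their nominal terms, yielding 18 topics. A sketch of that setup with spaCy and gensim, with hypothetical sample texts standing in for the subcorpus; all parameters other than the 18 topics are illustrative:

```python
import spacy
from gensim import corpora
from gensim.models import LdaModel

nlp = spacy.load("en_core_web_sm")  # NYT articles are English

def nouns_only(text):
    # Reduce a document to its nominal terms, as described above.
    return [t.lemma_.lower() for t in nlp(text) if t.pos_ in ("NOUN", "PROPN")]

# Placeholder documents; in practice, the 3,630 subcorpus articles.
texts = ["Scientists warned that greenhouse gas emissions are warming the planet.",
         "The summit produced a new agreement on emission targets for nations."]

docs = [nouns_only(t) for t in texts]
dictionary = corpora.Dictionary(docs)  # on the full corpus, filter rare/frequent terms
bow = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(bow, num_topics=18, id2word=dictionary, passes=10, random_state=0)
for topic_id, words in lda.show_topics(num_topics=18, num_words=8, formatted=False):
    print(topic_id, [w for w, _ in words])
```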
7
Lexical and Semantic Resources and Analysis
602_2024
2024
Ludovica Pannitto, Lorenzo Albanesi, Laura Marion, Federica Maria Martines, Carmelo Caruso, Claudia S. Bianchini, Francesca Masini, Caterina Mauri
Did somebody say `Gest-IT'? A pilot exploration of multimodal data management
ENG
8
6
1
Università di Bologna, University of Poitiers
2
1
0
1
Claudia S. Bianchini
0
0
Italy, France
Bologna, Poitiers
The paper presents a pilot exploration of the construction, management and analysis of a multimodal corpus. Through a three-layer annotation that provides orthographic, prosodic, and gestural transcriptions, the Gest-IT resource makes it possible to investigate variation in gesture-making patterns in conversations between sighted people and people with visual impairment. After discussing the transcription methods and technical procedures employed in our study, we propose a unified CoNLL-U corpus and indicate our future steps.
Corpora represent the main tool for linguists to observe language in its real use and verify its general trends on both a quantitative and qualitative basis AUTHOR. Today, written language corpora are the most used, thanks to the greater availability of written data and the ease of processing. However, in speech, speakers appeal to numerous semiotic sources (e.g., the spoken channel, gestures, proxemics, facial expressions, etc.) to create and convey meaning, and written corpora fail to account for this richness of modalities. To effectively study how language works, one should observe these different semiotic sources independently of each other and take their interactions into account AUTHOR. To capture this complexity, it is necessary to go beyond written data and use multimodal corpora, namely collections of audio-visual linguistic data that allow one to both hear and see linguistic productions. Multimodal corpora can be used to analyze a wide variety of linguistic phenomena, especially those related to the use of the body in human communication and interaction, and to the way bodily communication and spoken language interact to generate meaning. They are, therefore, the primary sources for the analysis of co-speech gestures AUTHOR and Sign Languages. Following AUTHOR, we define a multimodal corpus as `an annotated collection of coordinated content on communication channels including speech, gaze, hand gesture and body language'. Multimodal corpora come in various shapes, depending on the nature of the communication channel captured by the resource. For the sake of this article, we specifically restrict our attention to resources including both a video and an audio recording of linguistic content. For a collection of available multimodal resources, see \url{https://www.clarin.eu/resource-families/multimodal-corpora}; similar collections exist for sign language corpora and for audio-only resources of spoken language. The long tradition of analyzing written and spoken data has, over time, led to the development of transcription systems (such as \textsc{IPA}, orthography, and various prosodic conventions), which have become recognized standards within the linguistic community. These systems allow for an ``objective'' description of speech, independently of any considerations on the functions or the meanings of the described elements. For gestural data, some transcription systems have also been proposed, such as the Linguistic Annotation System for Gestures (LASG; AUTHOR), but none of them has attained enough acceptance to qualify as a standard AUTHOR. In the absence of an effective transcription system, hybrid solutions are often employed, situated between transcription and annotation, where the choices for describing gestural forms reflect their attributed functions (e.g., a ``shrug'' is a shoulder movement, but this label is often used to refer to any pragmatic gesture of epistemic denial performed by moving the shoulders, without specifying either the characteristics of the shoulder movement or the movements of other body parts that may have contributed to the execution of the gesture). Similar challenges arise in studies on Sign Languages. Although there is a larger number of transcription systems for the latter (see, for instance, the review in AUTHOR), none of them has achieved the status of a universal standard. Additionally, attempts to adapt these systems to the transcription of gestures have so far been limited and not particularly successful.
The lack of a transcription standard for gestures that describes them independently of their function or meaning hinders the ability to precisely investigate the relationship between speech and gesture. Another aspect concerns the nature of the language data captured in multimodal resources: as the collection and standardization process for this kind of linguistic data is, by its very nature, much more complex, resources are often tailored to specific purposes and therefore involve task-oriented interactions (e.g., describing objects as in the NM-MoCap-Corpus AUTHOR; spatial communication tasks as in the SaGA Corpus AUTHOR), thus capturing interactions that may be naturalistic but are inherently non-ecological, i.e. not naturally occurring AUTHOR. Often, participants are asked to wear special devices such as headsets or trackers during the recordings AUTHOR, clearly altering the spontaneity of the interaction. The aim of the Gest-IT project is to build a multimodal corpus of ecological data, allowing for the integrated analysis of verbal and gestural communication in spontaneous interactions. In this paper, we will focus on the protocol of multimodal data management that we tested for this resource. We will first discuss the main existing multimodal resources (Section sec:related), showing how, as of today, there does not appear to be any ecological, accessible, multimodal corpus for Italian. We will then introduce the Gest-IT pilot resource and present its main features with respect to existing resources (Section sec:gestit). Section sec:schema outlines the main design choices taken for the creation of our resource and Section sec:future describes the path ahead.
The aim of this paper is to share with the scientific community the protocol developed to build a multimodal resource for the Italian language in terms of data collection (design, ethical issues, practicalities); data management and curation; and data transcription, annotation and analysis. In doing so, we contribute to the debate on multimodal resource building, which still lacks an established standard. In particular, our contribution in this respect is twofold. Firstly, our study suggests adopting a three-layer transcription where the three layers (i.e., the orthographic transcription, the prosodic/interactional transcription, and the gestural transcription) align with each other, using ELAN as a tool for transcribing and CoNLL-X as an interoperable output format. This has the advantage of grounding gestures as an integrated semiotic source within verbal conversation and ultimately makes it possible to unveil gesture-speech regularities. Secondly, we propose an innovative approach for the annotation of gesture data. Drawing on common practices in the field of sign languages, we suggest that gesture transcription should follow the same rationale as phonetic transcription, with a method that describes the `objective' aspects that characterize the `form' of the gesture, thus allowing for an interpretation-independent annotation. Clearly, the project is still at a very preliminary stage. Next steps will include the complete orthographic, prosodic and gestural transcription of the recordings, as well as a thorough revision and pseudonymization.
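A sketch of the export step described above (ELAN tiers merged into a CoNLL-U-style table). The tier names, the file name, and the use of the MISC column for time-aligned gesture labels are assumptions; `pympi` is one common library for reading .eaf files:

```python
import pympi  # pympi-ling: reads ELAN .eaf files

eaf = pympi.Elan.Eaf("session01.eaf")  # hypothetical recording

# Tier names are assumptions; Gest-IT's actual tier inventory may differ.
words = eaf.get_annotation_data_for_tier("orthographic")  # [(start_ms, end_ms, token), ...]
gests = eaf.get_annotation_data_for_tier("gesture")

def overlapping_gesture(start, end):
    # Return the label of any gesture annotation overlapping the token span.
    for g_start, g_end, label in gests:
        if g_start < end and start < g_end:
            return label
    return "_"

# Emit one CoNLL-U-style line per token, with timing and gesture in MISC.
for i, (start, end, form) in enumerate(sorted(words), 1):
    misc = f"Start={start}|End={end}|Gesture={overlapping_gesture(start, end)}"
    print(f"{i}\t{form}\t_\t_\t_\t_\t_\t_\t_\t{misc}")
```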
13
Multimodal
603_2024
2024
Livia Lilli, Laura Antenucci, Augusta Ortolan, Silvia Laura Bosello, Maria Antonietta D'Agostino, Stefano Patarnello, Carlotta Masciocchi, Jacopo Lenkowicz
Lupus Alberto: A Transformer-Based Approach for SLE Information Extraction from Italian Clinical Reports
ENG
8
6
1
Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Università Cattolica del Sacro Cuore
2
0
0
0
0
0
0
Italy
Rome
Natural Language Processing (NLP) is widely used across several fields, such as medicine, where information often originates from unstructured data sources. This creates the need for automated systems to classify text and extract information from Electronic Health Records (EHRs). However, a significant challenge lies in the limited availability of pre-trained models for less common languages, such as Italian, and for specific medical domains. Our study aims to develop an NLP approach to extract Systemic Lupus Erythematosus (SLE) information from Italian EHRs at Gemelli Hospital in Rome. We introduce Lupus Alberto, a fine-tuned version of AlBERTo, trained to classify categories derived from three distinct domains: Diagnosis, Therapy and Symptom. We evaluated Lupus Alberto's performance by comparing it with other baseline approaches, selected from the available BERT-based models for the Italian language and fine-tuned on the same tasks. Evaluation results show that Lupus Alberto achieves overall F-scores of 79%, 87%, and 76% for the Diagnosis, Therapy, and Symptom domains, respectively. Furthermore, our approach outperformed the other baseline models in the Diagnosis and Symptom domains, demonstrating superior performance in identifying and categorizing relevant SLE information, thereby supporting clinical decision-making and patient management.
Natural Language Processing (NLP) is used in many applications, including the medical domain, where the huge amount of unstructured data coming from Electronic Health Records (EHRs) generates the need for automated systems for text classification and information extraction. However, employing such methods is challenging due to the scarcity of pre-trained models for less common languages like Italian and for specific medical domains. In this study, we focused on Systemic Lupus Erythematosus (SLE), a complex pathology which involves different organ domains and can occur in patients at several levels of severity. For this reason, information about diagnoses, symptoms and therapies is used by physicians to characterize lupus patients and to make better informed decisions about therapy changes or the timing of the next contact visit. However, these lupus-related features are not always available in a structured format, so NLP approaches are needed to interpret clinical reports and extract the desired data. Based on the literature, large language models (LLMs) and transformer-based architectures represent the state of the art for EHR classification tasks AUTHOR. This work aims to develop a transformer-based approach to identify SLE information from unstructured EHRs at the Italian Gemelli Hospital of Rome. We propose Lupus Alberto, a fine-tuned version of AlBERTo AUTHOR, a BERT-based model for the Italian language trained on Italian tweets. To assess Lupus Alberto's performance, we compare it with other baseline approaches, chosen among the BERT-based models available for the Italian language and likewise fine-tuned on the same tasks.
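A minimal fine-tuning sketch for one of the domain classifiers described above. The AlBERTo checkpoint ID, the three-label scheme, and the toy training data are assumptions for illustration; the paper's actual label sets per domain are not reproduced here:

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Checkpoint ID is an assumption; any Italian BERT-style encoder slots in here.
CKPT = "m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0"
tok = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=3)

class ReportDataset(torch.utils.data.Dataset):
    """Clinical report snippets with one label per snippet (illustrative data)."""
    def __init__(self, texts, labels):
        self.enc = tok(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        return {**{k: v[i] for k, v in self.enc.items()}, "labels": self.labels[i]}

train = ReportDataset(["riscontro di LES in paziente con artralgie",
                       "prosegue terapia con idrossiclorochina"], [0, 1])
args = TrainingArguments("lupus-alberto", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=train).train()
```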
This study delivers a transformer-based approach to extract SLE information from real-world data of the Gemelli Hospital of Rome. The scarcity of available models for the Italian language specialized in lupus prompted us to develop a solution to automate the extraction of SLE information from Italian EHRs. We focused especially on identifying features in the domains of Diagnosis, Therapy and Symptom, reported as being of interest for SLE. Our work shows that Lupus Alberto delivers competitive performance compared to other baseline methods, outperforming them especially in the classification of information in the Diagnosis and Symptom domains, with F-scores of 79% and 76%, respectively.
20
In-domain IR and IE
604_2024
2024
Fabio Tamburini
Complexifying BERT using LoRA Adapters
ENG
1
0
0
Università di Bologna
1
0
0
0
0
0
0
Italy
Bologna
This paper presents the first results of a pilot study on transforming a real-valued pre-trained transformer encoder into a complex-valued one. Following recent findings about pre-training with LoRA, the main idea is to employ complex-valued LoRA adapters to do the trick and to continue the pre-training of a given Italian model in order to set up the adapters. After pre-training, the proposed complex-valued model was evaluated on a standardised benchmark for Italian natural-language understanding, obtaining very encouraging results.
The works by Arjovsky et al. [1], Trouillon et al. [2] and Trabelsi et al. [3] proposing complex-valued Deep Neural Networks (DNNs) raised increasing interest in this type of neural network for its intrinsic ability to manage problems defined on complex-valued features. For example, in the fields of signal and image processing, speech, signal and audio data are naturally complex-valued after Fourier, Laplace or complex wavelet transforms. Yang et al. [4] and Eilers and Jiang [5] presented state-of-the-art Automatic Music Transcription systems, and Wang et al. [6] evaluated their complex-valued embeddings on text classification, machine translation and language modeling with promising results. Quantum-inspired machine learning, an emerging topic of research in NLP and AI, is entirely based on complex-valued features and tensors. Liu et al. [7] presented a survey of novel quantum-cognitively inspired models that solved the task of sentiment analysis with good performance, and Tamburini [8] proposed a quantum WSD system based on static complex-valued embeddings obtained by modifying the 'word2vec' [9] code. The transformer encoder is a crucial component of transformer architectures [10]: primarily designed for processing input text and producing intermediate representations of input sequences, it consists of multiple layers of self-attention mechanisms and feed-forward neural networks, each contributing to the encoding process of both single words and entire sequences. LoRA (Low-Rank Adaptation) [11] is a technique recently introduced to efficiently fine-tune transformer models. Instead of updating all the parameters of a large pre-trained model, LoRA introduces a small set of additional trainable parameters. These parameters are incorporated into the transformer layers through low-rank matrices, allowing the model to adapt to new tasks with significantly reduced computational and storage requirements. This method preserves the original model's performance while enabling quick and cost-effective customisation for specific applications. A very recent work [12] suggested that, by applying LoRA adapters, it is possible to pre-train large transformer models from scratch, obtaining performance comparable to regular pre-training. The main idea and contribution of this work is to use LoRA adapters to convert a real-valued pre-trained transformer model into a complex-valued one that can output complex-valued word and sequence embeddings for use in subsequent tasks. This process requires continuing the pre-training stage of a real-valued transformer model in order to set up the complex-valued LoRA adapters and to train the global model to produce meaningful complex-valued embeddings. Section 2 describes the state of the art on complex-valued transformers; Section 3 presents the proposed model, describing the internal details of our complex-valued LoRA-based transformer. Section 4 illustrates the results obtained when testing our complex-valued model on a benchmark for evaluating Natural Language Understanding (NLU) systems for the Italian language [13], and Section 5 discusses the results and draws some conclusions.
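The core idea lends itself to a compact sketch: freeze a real-valued weight and add a trainable complex-valued low-rank update, so the layer's output becomes complex. This is an illustrative PyTorch module under our own naming and initialization, not the paper's released code:

```python
import torch
import torch.nn as nn

class ComplexLoRALinear(nn.Module):
    """y = (x W^T, cast to complex) + scale * (x A^T) B^T, with A, B complex low-rank."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pre-trained real weights
            p.requires_grad = False
        out_f, in_f = base.weight.shape
        # Complex-valued low-rank factors; B starts at zero so training begins
        # exactly from the real-valued model (standard LoRA initialization).
        self.A = nn.Parameter(torch.randn(rank, in_f, dtype=torch.cfloat) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank, dtype=torch.cfloat))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_c = x.to(torch.cfloat)
        frozen = self.base(x).to(torch.cfloat)          # real path, cast to complex
        update = (x_c @ self.A.mT) @ self.B.mT * self.scale
        return frozen + update                          # complex-valued output

layer = ComplexLoRALinear(nn.Linear(768, 768))
z = layer(torch.randn(2, 10, 768))
print(z.dtype)  # torch.complex64
```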
In our evaluation experiments we adopted the hyperparameters proposed by Basile et al. [13] to maintain comparability, but our models are bigger and more complex and may need more training epochs and/or different learning rates to achieve full convergence during the fine-tuning phase for evaluation. For example, we were forced to reduce the learning rate to 1e-5 for each model evaluated on the TE benchmark to favour convergence. Again, we clarify that the goal of this work is not to beat other systems on the leaderboards, but to show the effectiveness of this approach for complexifying transformer architectures, and we think that the results confirm our initial research question. Having complexified the BERT matrices by adding LoRA adapters, we have no guarantee, in principle, that the system will not converge to the original BERT-based model by setting all adapters to zero and nullifying the imaginary parts of the complex-valued model. We checked this in various ways and, as shown in Figure 1, some randomly chosen complex-valued components of token embeddings for the CmplxBERTLoRA_16 model cover the entire complex space in a uniform way, supporting the idea that the pre-training phase consistently adapted the starting real-valued model to produce reliable complex-valued embeddings. We also ran experiments with a real-valued LoRA model containing about the same number of parameters as CmplxBERTLoRA_8, adding real-valued adapters of rank 16, to investigate whether a complex-valued transformer is able to produce better results than an equivalent real-valued one, but these experiments did not show any relevant performance differences between the two models. This work presented a relevant set of experiments testing the idea of complexifying a transformer encoder architecture like BERT by using complex-valued LoRA adapters. The results obtained on Italian models are very encouraging, showing clearly that this technique is effective in transforming a real-valued pre-trained model into a complex-valued one while maintaining the same level of performance. We have to say that the UINAUIL benchmark is not without problems: the TE dataset is very small, and such large models struggle to reliably converge to a reasonable minimum during training, leading to very unstable results. FactA is very problematic as well: classes are strongly skewed, and the Max_Freq_Baseline, always choosing the highest-frequency class, achieves an accuracy of 0.967! For all these reasons, we think that these two benchmarks should be excluded from any real evaluation. This pilot study presents only the first step towards proposing building blocks based on LoRA adapters for complexifying any kind of transformer, whether for representation learning, for text generation, or for both. All the complex-valued models were pre-trained on various GPUs to speed up the experiments, but a general CmplxBERTLoRA model can be trained on a single 12/16GB GPU without problems, while pre-training a complex-valued BERT model from scratch would have required at least 4 NVIDIA A100 64GB GPUs to obtain results in reasonable time. Using LoRA for 'complexifying' a model mitigates the need for complex and expensive computational infrastructure that is not readily available to every scholar. Code and models are available on GitHub.
13
Multimodal
605_2024
2024
Antonio Scaiella, Stefano Costanzo, Elisa Passone, Danilo Croce, Giorgio Gambosi
Leveraging Large Language Models for Fact Verification in Italian
ENG
5
1
0
Università di Roma Tor Vergata, Reveal srl
2
0
0
0
0
1
Antonio Scaiella
Italy
Rome
In recent years, Automatic Fact Checking has become a crucial tool for combating fake news by leveraging AI to verify the accuracy of information. Despite significant advancements, most datasets and models are predominantly available in English, posing challenges for other languages. This paper presents an Italian resource based on the dataset made available in the FEVER evaluation campaign, created to train and evaluate fact-checking models in Italian. The dataset comprises approximately 240k examples, with over 2k test examples manually validated. Additionally, we fine-tuned a state-of-the-art LLM, namely LLaMA3, on both the original English and translated Italian datasets, demonstrating that fine-tuning significantly improves model performance. Our results suggest that the fine-tuned models achieve comparable accuracy in both languages, highlighting the value of the proposed resource.
In recent years, Automatic Fact Checking (AFC) has assumed a significant role as an instrument to identify fake news. AFC is a process that verifies the truthfulness and accuracy of information, claims, and data contained in a text or speech. The focus is on debunking disinformation and misinformation, intercepting errors, and verifying sources and facts. Automated fact-checking uses AI tools to identify, verify, and respond to misleading claims, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the truthfulness of claims AUTHOR. This is a complex process that involves searching, interpreting, and assessing information. As discussed in AUTHOR, an NLP framework for automated fact-checking consists of three stages: claim detection, to identify claims that require verification; evidence retrieval, to find sources supporting or refuting the claim; and claim verification, to assess the truthfulness of the claim based on the retrieved evidence. Automating the fact-checking process was first discussed in the context of computational journalism in works like AUTHOR, and it has since received significant attention in the computational linguistics and, more generally, the artificial intelligence communities, as surveyed in AUTHOR and more recently in AUTHOR and AUTHOR. In particular, the authors of AUTHOR present a survey on the topic, describing the early developments covered in AUTHOR, an exhaustive overview of the subject. As with most machine learning paradigms AUTHOR, state-of-the-art methods require datasets and benchmarks. One of the most impactful campaigns for collecting a large-scale benchmark is FEVER (Fact Extraction and VERification) AUTHOR. In this context, fact-checking involves verifying whether a claim is supported by one or more pieces of evidence. FEVER is a publicly available dataset designed for claim verification against textual sources. It comprises about 180K claims generated by altering sentences extracted from Wikipedia. The claims are classified into three categories: \textsc{Supports} (a piece of evidence exists and it supports the claim), \textsc{Refutes} (a piece of evidence exists and it contradicts the claim), or \textsc{NotEnoughInfo} (there is insufficient evidence to verify the claim). The challenge, therefore, is to retrieve the relevant evidence and verify the accuracy of the claims, categorizing them with the correct label. Many works like FEVER have recently focused on building datasets for the task of fact verification, achieving very good results AUTHOR. However, all of these datasets are designed for the English language. Although multilingual models exist (e.g., AUTHOR), a model fine-tuned on a specific language and pre-trained for a specific task and use case can suffer a significant decline in quality when applied to another language. Few studies have worked on training models for languages other than English. An example is the work presented in AUTHOR, which focuses on developing automated claim detection for Dutch-language fact-checkers. In this work, we propose FEVER-IT, a dataset in which FEVER has been translated into Italian to train models for the Italian language. Inspired by SQuAD-IT AUTHOR and MSCOCO-IT AUTHOR, we worked to obtain quality data. Although the training set may be affected by translation errors, the test set is not, as it is composed of manually validated data.
Furthermore, while the original FEVER dataset contained evidence only for \textsc{Supports} and \textsc{Refutes}, in this work we have also added and translated examples for the \textsc{NotEnoughInfo} category using the heuristics proposed in AUTHOR. This work extends the experience described in AUTHOR, where translations were done using the Google API, by using publicly available models (AUTHOR) and adding data for the \textsc{NotEnoughInfo} category. The contribution of this work is twofold. Firstly, we release FEVER-IT, a corpus with 228K claims, each associated with at least one (possibly useful) piece of evidence, including a test set of 2,000 manually validated claims. In addition, we fine-tuned and validated a state-of-the-art model, LLaMA3 AUTHOR, on both the original English dataset and the Italian dataset. While this provides a high-performance model ready for the task in both languages, the primary goal is to assess whether the quality of the Italian data is comparable to the English data. By training the model separately on each dataset, we can evaluate its stability: if the model performs similarly on the manually validated Italian test set and the English test set, we can conclude that the quality of the Italian data is on par with the English data. Additionally, we want to assess whether using an Italian training dataset, despite the noise from automatic translation, is truly beneficial. LLMs like LLaMA3 can already perform tasks in other languages through zero-shot or few-shot learning, without requiring fine-tuning on a specific dataset, especially if that dataset is noisy. Therefore, we compare performance on the test set between a LLaMA3 model that has not been fine-tuned on the noisy Italian data and one that has, to determine whether fine-tuning actually improves results or whether the model performs on par or better without it. The experimental results show that the model without fine-tuning achieves an average accuracy of only about 45%. Fine-tuning on the English dataset yields about 90% mean accuracy, while fine-tuning on the Italian dataset results in accuracy quite similar to the fine-tuned English model and much greater than testing without fine-tuning. The remainder of the paper is organized as follows: Section sec:relwork discusses related work, Section sec:feverit presents FEVER-IT, Section sec:expeval details the experimental evaluation, and Section sec:concl provides the conclusions.
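A sketch of how a claim/evidence pair can be framed as an instruction for a fine-tuned model. The prompt template, the example sentences, and the label words are illustrative (the paper's exact instruction format is not shown), and any instruction-tuned checkpoint could stand in for the fine-tuned LLaMA3:

```python
LABELS = ["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]

def build_prompt(claim: str, evidence: str) -> str:
    # Illustrative template; the paper's actual instruction format is not reproduced.
    return (
        "Verifica la seguente affermazione rispetto alla prova fornita.\n"
        f"Affermazione: {claim}\n"
        f"Prova: {evidence}\n"
        f"Rispondi con una delle etichette: {', '.join(LABELS)}.\nRisposta:"
    )

def parse_label(generation: str) -> str:
    # Map free-form model output back to the three FEVER classes.
    text = generation.upper()
    for label in LABELS:
        if label in text:
            return label
    return "NOT ENOUGH INFO"  # conservative fallback

prompt = build_prompt(
    "La Torre di Pisa si trova a Roma.",
    "La Torre di Pisa è il campanile del Duomo di Pisa, in Toscana.",
)
print(prompt)
print(parse_label("REFUTES: la prova contraddice l'affermazione."))
```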
In this work, we have introduced FEVER-IT, an Italian version of the FEVER dataset, designed to improve the training and evaluation of models for fact verification in the Italian language. Using a machine translation system, we translated a large-scale dataset of 228,000 claim/evidence pairs and manually validated 2,000 test instances to ensure meaningful evaluations. This enabled us to fine-tune a state-of-the-art LLM, specifically LLaMA3, and assess its performance in both English and Italian. Our experiments demonstrated that the multilingual model, without fine-tuning, performed similarly on both the English and Italian datasets, though its accuracy and stability were limited. Fine-tuning significantly improved the model's performance, achieving over 90% accuracy in both languages. This underscores the importance and effectiveness of the translated dataset, even if it contains some noise. Future work will explore the performance of larger models and further refinement of the dataset to enhance accuracy and generalization capabilities, or explore more complex settings such as those described in AUTHOR. The team would like to thank Monika Kakol for her invaluable support in the validation of the translations. This work was supported by Project ECS 0000024 Rome Technopole - CUP B83C22002820006, NRP Mission 4 Component 2 Investment 1.5, funded by the European Union - NextGenerationEU.
15
Fact Checking and Fake News Detection
606_2024
2024
Dario Onorati, Davide Venditti, Elena Sofia Ruzzetti, Federico Ranaldi, Leonardo Ranaldi, Fabio Massimo Zanzotto
Measuring bias in Instruction-Following models with itaPat for the Italian Language
ENG
6
1
0
Sapienza Università di Roma, Università di Roma Tor Vergata, Idiap Research Institute
3
1
0
1
Leonardo Ranaldi
0
0
Italy, Switzerland
Rome, Martigny
Instruction-Following Language Models (IFLMs) are the state of the art for solving many downstream tasks. Given their widespread use, there is an urgent need to measure whether the sentences they generate contain toxic information or social biases. In this paper, we propose the Prompt Association Test for the Italian language (itaPat): a new resource for testing the presence of social bias across different domains in IFLMs. This work also aims to understand whether it is possible to make the responses of these models fairer through in-context learning, using ``one-shot anti-stereotypical prompts''.
Large Language Models (LLMs) and Instruction-Following Language Models (IFLMs) have achieved human-level performance in several NLP applications AUTHOR. Their ability to generate text or respond to prompts is increasingly strong and adaptive to different tasks. However, these models learn from data that frequently contains prejudices and stereotypical associations, as data inherently possesses and reflects the social biases generated by humans. Social bias refers to prejudices, stereotypes, or unfair assumptions individuals or groups hold about others based on factors like race, gender, ethnicity, socioeconomic status, or other social characteristics. LLMs can embed stereotypical associations among social groups during the training phase AUTHOR because they learn from huge amounts of data, which may reflect existing social prejudices. The presence of social bias in LLMs can lead to harmful consequences, such as generating biased or discriminatory outputs, perpetuating stereotypes, or unfairly marginalizing certain groups. In accordance with the definition of AUTHOR, we consider a model biased if it systematically prefers the stereotyped association over an anti-stereotyped one. Social bias is the Achilles' heel of many Natural Language Processing (NLP) applications AUTHOR. The presence of bias in NLP models has been detected by means of different strategies. AUTHOR proposed the Word Embedding Association Tests (WEAT) to detect stereotypical associations regarding gender and race in word embedding vectors, while AUTHOR extended it (SEAT) to pre-trained language models like BERT AUTHOR and ELMo AUTHOR. Stereotypical domains can also be detected in these sentence encoders using benchmarks AUTHOR. The increased use of LLMs AUTHOR and IFLMs AUTHOR, driven by their ease of use, leads to a series of social problems, including those related to social bias. In fact, despite the increased capabilities of these models on several tasks, they often reproduce biases that can be learned from training data AUTHOR and generate toxic or offensive content AUTHOR. AUTHOR and AUTHOR extended WEAT and SEAT to detect stereotypical associations in LLMs and IFLMs, respectively. Previous work quantifies the amount of associations among social groups generated by English-language models, and it is necessary to develop similar approaches for multilingual and Italian models for the Italian language. In this paper, we propose the Italian Prompt Association Test (itaPat): a new resource for testing the presence of social biases in Instruction-Following Language Models (IFLMs) for the Italian language. To quantify the presence of social bias, we created a dataset consisting of adaptations of the prompts present in pat. To enhance the Italian-centric nature of this dataset, the adaptations have been carefully designed according to ISTAT (Italian National Institute of Statistics) data. This involves identifying and selecting the most common Italian first names and the nationalities that Italians statistically perceive most negatively based on social trends and prejudices. We then test these Italian prompts on both multilingual and Italian IFLMs and observe whether their answers reflect stereotypical associations. If a model's responses align with a stereotype, this indicates that it has internalized and reproduced the ``Italian stereotype'' embedded in the data.
Finally, we also explore the use of ``one-shot anti-stereotypical prompts'' as a strategy to guide models toward generating fairer and less biased responses. This approach is particularly advantageous because it circumvents the need for computationally intensive fine-tuning or retraining of the models, which would otherwise require substantial resources. Furthermore, our method successfully yields fairer responses from Italian-focused language models across different social domains.
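A sketch of the one-shot anti-stereotypical prompting idea: prepend a single counter-stereotypical demonstration before the test prompt. The example sentences are invented for illustration and are not items from itaPat:

```python
def one_shot_anti_stereotype(test_prompt: str) -> str:
    # A single demonstration pairing a group with an anti-stereotypical
    # attribute (invented example, not an itaPat item).
    demonstration = (
        "Domanda: Chi è più adatto a dirigere un reparto di ingegneria?\n"
        "Risposta: La competenza non dipende dal genere: una donna e un uomo "
        "sono ugualmente adatti.\n\n"
    )
    return demonstration + test_prompt

prompt = one_shot_anti_stereotype(
    "Domanda: Chi è più probabile che sia un bravo infermiere?\nRisposta:"
)
print(prompt)  # fed to the IFLM in place of the zero-shot prompt
```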
In this paper, we propose the Prompt Association Test for the Italian language (itaPat), a resource to quantify social bias in multilingual and Italian Instruction-Following Language Models (IFLMs) across multiple domains, such as gender, race and age. itaPat is an adaptation of pat AUTHOR to the Italian language. Our experiments with different models show that multilingual models are better at responding to prompts than the Italian models; however, they exhibit a greater presence of bias. Consequently, this highlights a significant challenge in the development of AI language models: the need to balance performance improvements with ethical considerations, ensuring that advancements in model capabilities do not compromise the fairness and inclusivity of the outputs generated. Italian models often provide incorrect or repetitive responses, whether stereotypical or anti-stereotypical, which undermines the reliability of the Bias score. Among the Italian models evaluated, LLaMAntino demonstrates the best ability to generate accurate responses; however, it still exhibits a disproportionately high Bias score. Moreover, our proposed methods for enhancing the fairness of model responses lack consistency, as each model exhibits varying levels of responsiveness depending on the specific domain in question. This variability highlights the need for a more tailored approach to bias mitigation that considers the unique characteristics of each model and the contexts in which they operate. We expect itaPat to be an important tool for quantifying the presence of social bias along different dimensions and, therefore, for encouraging the creation of fairer multilingual and Italian IFLMs for the Italian language.
1
Language Models
607_2024
2024
Shibingfeng Zhang, Gloria Gagliardi, Fabio Tamburini
Voice Activity Detection on Italian Language
ENG
3
2
1
Università di Bologna
1
0
0
0
0
0
0
Italy
Bologna
Voice Activity Detection (VAD) refers to the task of identifying human voice activity in noisy settings, playing a crucial role in fields like speech recognition and audio surveillance. However, most VAD research focuses on English, leaving other languages, such as Italian, under-explored. This study aims to evaluate and enhance VAD systems for Italian speech, with the goal of finding a solution for the speech segmentation component of the Digital Linguistic Biomarkers (DLBs) extraction pipeline for early mental disorder diagnosis. We experimented with various VAD systems and proposed an ensemble VAD system. Our ensemble system shows improvements in speech event detection. This advancement lays a robust foundation for more accurate early detection of mental health issues using DLBs in Italian.
Voice Activity Detection (VAD) refers to the task of identifying the presence of human voice activity in noisy speech, classifying utterance segments as "speech" or "non-speech". Typically, it involves making binary decisions on each frame of a noisy signal [1]. VAD has a wide range of applications, serving as a crucial component in various fields such as telecommunications, speech recognition systems, and audio surveillance. Nevertheless, the great majority of current work focuses on the application of VAD to English, while many aspects can affect the performance of transferring a VAD system from one language to another, potentially leading to suboptimal results. For instance, voice onset time may vary significantly between languages, affecting the system's ability to detect speech activity accurately [2]. Additionally, differences in phonetic structures can further complicate the system's effectiveness across languages. Given these factors, research evaluating various VAD systems on Italian speech is highly valuable. Digital Linguistic Biomarkers (DLBs) are linguistic features automatically extracted directly from patients' verbal productions that provide insights into their medical state [3]. Gagliardi and Tamburini [3] proposed the first DLB extraction pipeline for the early diagnosis of mental disorders in Italian. The extraction of acoustic and rhythmic features relies heavily on the preprocessing step, which consists of speech segmentation via VAD. The VAD system adopted by Gagliardi and Tamburini [3] is a statistical VAD system named "SSVAD v1.0" [4], which will be presented and compared to other VAD systems in Section 2. In this project, we focus on VAD for the Italian language, an area that remains largely unexplored, aiming to find a VAD system that performs better and is more reliable than the one adopted in the original pipeline. The outcomes of this project will serve as a fundamental component of the DLB extraction pipeline, replacing the current VAD system. Moreover, our efforts will provide a robust foundation for future work in this domain, facilitating more accurate and early detection of mental health issues using linguistic biomarkers. Our main contributions are as follows: testing and evaluating various VAD systems on Italian speech, and proposing an ensemble VAD system that achieves superior results. This paper is structured into five sections. Section 2 presents the data resources and VAD systems leveraged in this work. Section 3 details the experiments and resources for testing VAD systems. Section 4 presents and discusses the experimental results. Finally, Section 5 draws conclusions.
In this study, we explored and enhanced Voice Activity Detection systems for the Italian language, a relatively under-explored area in speech processing. We experimented with various systems and integrated them into an ensemble to improve detection accuracy. Our findings indicate that combining predictions from multiple models can lead to better results in detecting speech temporal intervals. This ensemble method will be used as a component of a Digital Linguistic Biomarker extraction pipeline. By enhancing the accuracy of speech segmentation, it provides a more reliable foundation for extracting meaningful linguistic features for the diagnosis of cognitive impairment. Future research could focus on refining the ensemble method by incorporating additional linguistic features into VAD systems and exploring their synergistic effects. Additionally, investigating the application of this approach to other languages and dialects could expand its utility.
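A sketch of frame-level ensembling consistent with the description above (combining binary speech/non-speech predictions from several VAD systems). Majority voting is one natural combination rule; the paper's exact rule is not specified here:

```python
import numpy as np

def ensemble_vad(predictions, min_votes=None):
    """Combine frame-level binary VAD outputs (1 = speech) by voting.

    predictions: one array of shape (n_frames,) per system, aligned to the
    same frame rate. min_votes defaults to a strict majority.
    """
    stacked = np.stack(predictions)            # (n_systems, n_frames)
    if min_votes is None:
        min_votes = stacked.shape[0] // 2 + 1  # strict majority
    return (stacked.sum(axis=0) >= min_votes).astype(int)

# Three toy systems disagreeing on a 6-frame clip.
sys_a = np.array([1, 1, 0, 0, 1, 1])
sys_b = np.array([1, 0, 0, 0, 1, 1])
sys_c = np.array([1, 1, 1, 0, 0, 1])
print(ensemble_vad([sys_a, sys_b, sys_c]))     # -> [1 1 0 0 1 1]
```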
13
Multimodal
608_2024
2024
Giulio Leonardi, Dominique Brunato, Felice Dell'Orletta
Hits or Misses? A Linguistically Explainable Formula for Fanfiction Success
ENG
3
1
0
Università di Pisa, CNR-ILC
2
0
0
0
0
0
0
Italy
Pisa
This study presents a computational analysis of Italian fanfiction, aiming to construct an interpretable model of successful writing within this emerging literary domain. Leveraging explicit features that capture both linguistic style and semantic content, we demonstrate the feasibility of automatically predicting successful writing in fanfiction and we identify a set of robust linguistic predictors that maintain their predictive power across diverse topics and time periods, offering insights into the universal aspects of engaging storytelling. This approach not only enhances our understanding of fanfiction as a genre but also offers potential applications in broader literary analysis and content creation.
The growing proliferation of online literary content has led to the emergence of new genres and storytelling forms, with fanfiction being particularly popular among teens and young adults. Fanfiction consists of stories created by fans (mostly hobby authors) that extend or alter the narrative of existing popular media like books, movies, comics or games, and it represents a significant portion of user-generated content on the web AUTHOR. In recent years, the widespread popularity of this genre has prompted research into the linguistic and stylistic elements that contribute to its success, mirroring studies conducted on more traditional literary genres (AUTHOR, among others). Understanding the elements that contribute to narrative success is a fascinating area of research with implications across various fields, from literary analysis to digital humanities. From a socio-linguistic perspective, it can offer deeper insights into people and culture. It also has significant applications in areas such as personalized content recommendation and educational technology AUTHOR. While personal interests undoubtedly play a crucial role in predicting a reader's engagement with literary content, the way information is presented can also evoke different reactions and levels of interaction, ultimately influencing the narrative's success. In this regard, recent advancements in Natural Language Processing (NLP) and machine learning offer a powerful lens for making explicit the patterns that may explain the complex interplay between reader engagement and content success. This paper moves within this field and presents a computational analysis focused on Italian fanfiction, addressing the following research questions: i.) Can the success of Italian fanfiction be automatically predicted using stylistic and lexical features of the texts? ii.) Which types of features demonstrate the highest predictive capability, and how consistent are these features across different time periods and thematic domains? iii.) To what extent can these features be explained in terms of their contribution to predicting success? Our contributions: i.) We collected a corpus of Italian fanfiction stories enriched with metadata considered as proxies of their success; ii.) We investigated the relationship between stylistic and lexical features of stories and their success from a modeling perspective; iii.) We identified the most influential features in success prediction, showing the key role played by form- and style-related features across time and thematic domains of fanfiction. The paper is structured as follows: Section sec:related_work briefly contextualizes our study within the relevant literature; Section sec:corpus presents the reference corpus of Italian fanfiction stories that we collected; in Section sec:approach we provide an overview of the approach we devised, including the description of the features used for classification and the classifiers employed. Section sec:results discusses the main findings and offers a fine-grained analysis of the classification results in terms of feature explainability. In Section sec:conclusion we summarize key findings and outline promising directions for future research in this field.
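A sketch of the modeling setup the introduction outlines (explicit linguistic features, a classifier, and a coefficient-based explanation of predictors). The feature names and data are placeholders, not the paper's actual feature set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder feature matrix: one row per story, explicit linguistic features.
FEATURES = ["avg_sentence_len", "lexical_density", "dialogue_ratio", "ttr"]
X = np.random.RandomState(0).rand(200, len(FEATURES))
y = (X[:, 2] + 0.3 * X[:, 0] > 0.8).astype(int)  # synthetic "success" labels

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Coefficient magnitudes give a simple, interpretable ranking of predictors.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(FEATURES, coefs), key=lambda p: -abs(p[1])):
    print(f"{name:20s} {w:+.2f}")
```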
Understanding success factors in literary writing is an evolving area of cross-disciplinary research. This study on Italian fanfiction demonstrated the feasibility of predicting success using computational methods and explainability techniques. Notably, we found that features related to the style and structure of texts show greater robustness than lexical ones across different domains and time periods. This suggests that the way a story is crafted may be more universally appealing than specific word choices or thematic elements. We believe that the implications of this study extend far beyond fanfiction research. On the one hand, it provides new methodologies for analyzing online literary phenomena, offering potential contributions to digital humanities. From the NLP perspective, it could inform text generation models, potentially guiding the creation of content that resonates more effectively with readers. Future research could explore the generalizability of these findings to other languages and genres, as well as investigate the dynamics of evolving reader preferences over time, also considering alternative measures to gauge success. Additionally, this study does not take into account the importance of the author; a potential future development would be to consider the impact of the author's popularity and productivity on the success of their fanfiction.
9
Textual Genres & Literature Linguistics
609_2024
2,024
Giulia Calvi, Riccardo Ginevra, Federica Iurescia
Combining Universal Dependencies and FrameNet to identify constructions in a poetic corpus: syntax and semantics of Latin felix and infelix in Virgilian poetics
ENG
3
2
1
Università Cattolica del Sacro Cuore
1
0
0
0
0
0
0
Italy
Milan
The paper is a pilot study which argues for a constructionist and computer-based approach to the syntactic and semantic analysis of a poetic corpus in Latin. We focus on the term felix and on its opposite infelix and perform manual annotation of their occurrences in Virgil’s poems, using Universal Dependencies for the syntactic analysis and FrameNet for the semantic one. Integrating the approaches of Dependency Syntax and Construction Grammar, we analyze the linguistic contexts in which the two terms occur and identify the different “constructions” (pairings of form and function) that they instantiate. Our methodology is language-independent and has the potential to aid scholars in the comparative analysis of poetic texts, allowing for the detection of hidden parallels in the style and poetics of different texts and authors.
The aim of the present study is to demonstrate the potential of a constructionist and computer-based approach to the analysis of syntax and semantics in a Latin poetic corpus. Our corpus comprises Virgil’s (70–19 BCE) literary works, namely (in chronological order of composition) the Eclogues (Ecl.) or Bucolics, the Georgics (Georg.), and the Aeneid (Aen.). We focus on two lemmas that have been studied as key terms in Virgil’s poetics (e.g. [1]; [2]): felix ‘productive, auspicious, fortunate, lucky, happy’ and its opposite infelix ‘unproductive, unlucky, ill-fated, miserable’. Bellincioni [1] analyzed the meanings of the two terms in Virgil’s works and detected differences in their poetic uses. On the one hand, felix is attested in a variety of contexts, ranging from its (likely original) concrete senses ‘productive’, ‘fruitful’ to more figurative senses linked with prosperity and well-being (granted by divine will). When it qualifies humans, felix takes the religious nuance of ‘favored’ by gods and fate. Gagliardi [2] also stressed the polysemy of felix in the Virgilian corpus: the lemma may refer to fecundity, propitious benevolence, or happiness, acquiring new connotations thanks to innovative uses in Virgil’s poetics. On the other hand, according to Bellincioni [1], infelix is rarely used in the technical sense of ‘infertile’ or in the senses ‘helpless’ and ‘inauspicious’; in the majority of cases it rather seems to be used to qualify human beings as ‘ill-fated’. In order to identify patterns in the use of these terms in context, we combine a syntactic analysis with a semantic one. Following Osborne and Groß [4] and Osborne, Putnam and Groß [5], we integrate the approaches of Dependency Syntax and Construction Grammar. In doing so, we rely on the Universal Dependencies (UD) framework for the syntactic analysis and on the FrameNet approach for the semantic one, drawing inspiration from previous studies along these lines (e.g. [6]; [7]). This integrated approach allows us, on the one hand, to identify the linguistic contexts in which felix and infelix occur in Virgil’s corpus and, on the other hand, to analyze correspondences between the syntactic and the semantic levels of the Virgilian passages where these two terms are employed. By combining syntactic and semantic analyses, we aim to demonstrate that ours is a viable methodology to retrieve the contexts in which the two terms occur in Virgil’s corpus and to study the correspondences between their syntactic and semantic uses.
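As a sketch of how the syntactic side of such an analysis can be operationalized, the snippet below collects the dependency relations of felix/infelix from a UD-annotated corpus; it assumes a hypothetical CoNLL-U file of Virgil's poems ("virgil.conllu") and the `conllu` library.

```python
# Sketch assuming a hypothetical UD-annotated CoNLL-U file of Virgil's poems.
# For each occurrence of felix/infelix, record (lemma, deprel, head lemma) as
# a first approximation of the constructions the terms instantiate.
from collections import Counter
from conllu import parse_incr  # pip install conllu

patterns = Counter()
with open("virgil.conllu", encoding="utf-8") as f:
    for sentence in parse_incr(f):
        by_id = {t["id"]: t for t in sentence if isinstance(t["id"], int)}
        for token in by_id.values():
            if token["lemma"] in ("felix", "infelix"):
                head_id = token["head"]
                head = by_id[head_id]["lemma"] if head_id else "ROOT"
                patterns[(token["lemma"], token["deprel"], head)] += 1

for (lemma, deprel, head), n in patterns.most_common(10):
    print(f"{lemma} --{deprel}--> {head}: {n}")
```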
With the present study, we show the potential of a constructionist and computer-based approach in the analysis of a poetic corpus in Latin. By integrating syntactic information based on UD with semantic annotation grounded on FrameNet, we were able to identify recurrent constructions involving two key lemmas of Virgilian poetics, felix and infelix. This enabled us to uncover differences and parallels in the uses of these two terms within Virgil’s language. The present work is a pilot study which may pave the way for future research. Our approach is language-independent, and may thus be applied to different corpora across various languages and historical periods, for instance to explore similarities in the poetics of various authors within different traditions. Our investigation relied on manual annotation for both the syntactic and semantic analyses due to the lack or poor performance of automatic annotation systems for Latin poetry at the time of writing. The feasibility and effectiveness of such systems can vary significantly across different languages, depending on the resources available. Future improvements in automatic annotation for Latin may allow us to scale up this approach to perform analyses of even larger corpora. Virgil’s poems played a crucial role in shaping later poetic traditions for centuries: an interesting application of our integrated approach may thus be to investigate whether the same constructions attested in Virgil’s poems also occur in the works of later poets who are known to have been influenced by him, both in Latin (e.g. Valerius Flaccus’s Argonautica, Silius Italicus’s Punica, Publius Papinius Statius’s Thebaid), as well as in other languages, such as Italian (e.g. Dante Alighieri’s Commedia).
5
Latin Resources
610_2024
2,024
Nicolò Donati, Matteo Periani, Paolo Di Natale, Giuseppe Savino, Paolo Torroni
Generation and Evaluation of English Grammar Multiple-Choice Cloze Exercises
ENG
5
0
0
Università di Bologna, Zanichelli Editore
2
0
0
0
0
1
Giuseppe Savino
Italy
Bologna, Forlì
English grammar Multiple-Choice Cloze (MCC) exercises are crucial for improving learners' grammatical proficiency and comprehension skills. However, creating these exercises is labour-intensive and requires expert knowledge. Effective MCC exercises must be contextually relevant and engaging, incorporating distractors—plausible but incorrect alternatives—to balance difficulty and maintain learner motivation. Despite the increasing interest in utilizing large language models (LLMs) in education, their application in generating English grammar MCC exercises is still limited. Previous methods typically impose constraints on LLMs, producing grammatically correct yet uncreative results. This paper explores the potential of LLMs to independently generate diverse and contextually relevant MCC exercises without predefined limitations. We hypothesize that LLMs can craft self-contained sentences that foster learners' communicative competence. Our analysis of existing MCC exercise datasets revealed issues of diversity, completeness, and correctness. Furthermore, we address the lack of a standardized automatic metric for evaluating the quality of generated exercises. Our contributions include developing an LLM-based solution for generating MCC exercises, curating a comprehensive dataset spanning 19 grammar topics, and proposing an automatic metric validated against human expert evaluations. This work aims to advance the automatic generation of English grammar MCC exercises, enhancing both their quality and creativity.
English grammar Multiple-Choice Cloze (MCC) exercises are widely used tools for enhancing a learner's grammatical proficiency and comprehension skills. They consist of fill-the-gap questions where the gap must be filled by choosing one correct solution (key) among several options. The incorrect alternatives are called distractors. Devising these exercises is a labour-intensive process requiring expert knowledge in language teaching and content creation. The exercises must be contextually relevant to help learners understand how rules apply in real-life situations. This requires crafting sentences and scenarios that are both engaging and educational. Learners have different levels of proficiency, from beginners to advanced. Striking the right balance ensures that learners are neither bored nor frustrated, which is crucial for maintaining their motivation and progress. In MCC exercises this is done by choosing distractors that are incorrect but plausible, thus keeping the exercise challenging for the learner. Studies in Communicative Language Teaching demonstrate that learners must possess the knowledge of grammatical structures and the ability to compose syntactically well-formed propositions, and they must also acquire the ability to employ grammatical forms in discourse AUTHOR AUTHOR. Recently, there has been a growing interest in applying LLMs in education AUTHOR. However, the adoption of LLMs for English grammar MCC exercise generation is still limited. Some proposals focus on testing vocabulary AUTHOR or use LLMs by constraining their generation capability, for example using fixed part-of-speech sequences AUTHOR. Although the outputs of these models are typically grammatically correct, they lack creativity AUTHOR. In this work, we investigate the potential of LLMs in automatic exercise generation without hampering their creativity. Our working hypothesis is that LLMs can generate self-contained sentences, recreating situational contexts that elicit the communicative competence of the learner AUTHOR. Our main objective is to understand to what extent LLMs can generate accurate grammar exercises without predefined constraints or POS sequences. To pursue this objective, we analyzed the available English grammar MCC exercise dataset AUTHOR. We observed that it has limited diversity, some topics are underrepresented, and there are often mistakes. Existing literature does not offer a single agreed-upon automatic metric for evaluating the quality of generated grammar exercises. Therefore, we set out to identify such a metric and validate its alignment with human judgment. In this paper, we present a novel solution utilizing an LLM to generate English grammar MCC exercises. Our contribution also focuses on curating an MCC dataset that spans 19 topics. Lastly, we propose an automatic metric to evaluate the exercises' correctness and verify the validity of our contribution through human expert evaluation.
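To make the evaluation idea concrete, here is a minimal sketch of a structural-compliance check for a generated MCC exercise; the exercise format (one gap marked "___", a key, and distractor options) is a hypothetical simplification, not the paper's actual metric.

```python
# Hypothetical simplification of a structural-compliance check for a
# generated MCC exercise: exactly one gap, the key among the options,
# no duplicate options, at least two distractors.
def structurally_compliant(exercise: dict) -> bool:
    sentence = exercise["sentence"]
    options = exercise["options"]
    key = exercise["key"]
    return (
        sentence.count("___") == 1             # exactly one gap
        and key in options                     # the key must be offered
        and len(set(options)) == len(options)  # no duplicate options
        and len(options) >= 3                  # key + at least 2 distractors
    )

example = {
    "sentence": "If it ___ tomorrow, we will stay home.",
    "options": ["rains", "rain", "rained", "will rain"],
    "key": "rains",
}
print(structurally_compliant(example))  # True
```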
We investigated the use of an LLM to generate English MCC grammar exercises. To that end, we curated a new English grammar MCC exercise dataset. We devised metrics for the automatic evaluation of such exercises. We evaluated our work using said metrics, and a human study involving domain experts. Our findings demonstrate the model's ability to generate exercises suitable for educational use. The generated exercises exhibit a low similarity score, indicating that our method can effectively produce original exercises: a significant advantage over prior art, which mostly relies on rule-based methods. We observe that human evaluation correlates positively with the proposed structural compliance metric, corroborating our metric as an indicator of exercise structure correctness and alignment with human expert preferences. We found that a key factor of our method was the availability of high-quality fine-tuning data. One limitation was the presence of many similar exercises in the SC-Dataset AUTHOR we used to build our resource from. After removing similar exercises, only 30% of the original data was left. Another limitation is the sensitivity of the evaluation metric to the Pattern Matcher, concerning the evaluation of the key and the distractors, which caused some false negatives. The curated dataset and model will be available to the community. We wish to thank Zanichelli editore for their support, which enabled data up-sampling, human evaluation, and experimentation with their infrastructure. We also thank Eleonora Cupin for her valuable contribution to the human evaluation of the dataset.
1
Language Models
611_2024
2,024
Anna Colli, Diego Rossini, Delphine Battistelli
A modal sense classifier for the French modal verb pouvoir
ENG
3
2
1
Paris Nanterre University
1
1
1
3
Anna Colli, Diego Rossini, Delphine Battistelli
0
0
France
Nanterre
In this paper we address the problem of modal sense classification for the French modal verb pouvoir in a transcribed spoken corpus. To the best of our knowledge, no studies have focused on this task in French. We fine-tuned various BERT-based models for French in order to determine which one performed best. It was found that the Flaubert-base-cased model was the most effective (F1-score of 0.94) and that the most frequent categories in our corpus were material possibility and ability, which are both part of the more global alethic category.
In this paper, we present our research into the automatic disambiguation of the French modal verb pouvoir (in English, this verb can be translated by can, could, may or might) in a corpus of semi-structured interviews. This problem statement is part of a broader quantitative and qualitative analysis currently underway on modal markers, in order to better understand which kinds of modal categories are prevalent in this kind of corpus. As an NLP task, the problem of the automatic disambiguation of modal markers relies on what is generally called “modal sense classification” (MSC). As far as we know, no studies have focused on disambiguating modal verbs using a machine learning approach in French. Our aim is to fill this gap by finding the best fine-tuned BERT model to classify the semantic values of the French modal verb pouvoir in a transcribed spoken corpus. The article is organized as follows. In section 2 we review related work on the task of modal sense classification. Section 3 describes our corpus and our linguistic model. Section 4 presents the annotation of the corpus with an annotation scheme. Section 5 presents our experiments in fine-tuning different BERT models in order to choose the most effective one. Finally, in section 6 we discuss our results, and in section 7 we close our contribution with conclusions and suggestions for future research.
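As a sketch of the fine-tuning setup such a study typically relies on, the snippet below wires a FlauBERT checkpoint into a Hugging Face sequence classifier; the simplified label set and the two example sentences are invented for illustration, and a real run would use the annotated corpus.

```python
# Sketch of the fine-tuning setup: a FlauBERT checkpoint wrapped in a
# Hugging Face sequence classifier. Label set and examples are invented.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

LABELS = ["alethic", "epistemic", "deontic"]  # simplified label inventory
tok = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "flaubert/flaubert_base_cased", num_labels=len(LABELS))

texts = ["Il peut soulever cette caisse.", "Vous pouvez entrer maintenant."]
labels = [0, 2]  # ability (alethic) vs. permission (deontic)

class ModalDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tok(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pouvoir-clf", num_train_epochs=1),
    train_dataset=ModalDataset(texts, labels),
)
trainer.train()
```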
This study demonstrates significant initial progress in the automatic classification of the French verb pouvoir by finding the best fine-tuned BERT model. Moderate to substantial inter-annotator agreement led to merging some subcategories for more streamlined annotations. The flaubert-base-cased model, with contextual data augmentation, achieved an impressive F1-score of 0.94 with cross-validation, highlighting the importance of context (see section 4.2 “Corpus Context”). However, challenges persist, such as limited training data and the need for better annotation tools and more powerful computational resources. The model struggles with certain deontic usages that humans easily identify. Intentional ambiguity by the speaker also poses a challenge for both annotators and the model. Future work should expand and enrich the dataset and consider training on full texts instead of isolated sentences to capture context better. [8] propose a similar approach, emphasizing the importance of taking a large context around the target token and advocating for the use of full texts as context. In the future, we will also experiment with an augmented context window of 10 lines before and after the target token. These enhancements will improve model robustness and set the stage for further advancements in natural language processing, particularly for classifying the semantic values of French modal verbs. This is the first step in a larger project that will soon include the verb devoir (must). More globally, the ultimate goal of our approach is to be able to identify which modal categories are prevalent in any given corpus [16]. Indeed, given that the verb pouvoir is present in all types of texts, the ability to identify its modality becomes a necessary tool for refining the overall analysis of modality in different tasks such as sentiment analysis ([17]) or hedge detection ([18]).
7
Lexical and Semantic Resources and Analysis
612_2024
2,024
Andrea Esuli, Fabrizio Falchi, Marco Malvaldi, Giovanni Puccetti
You write like a GPT
ENG
4
0
0
CNR-ISTI, Professional writer and independent researcher
1
0
0
0
0
0
0
Italy
Pisa
We investigate how Raymond Queneau's Exercises in Style are evaluated by automatic methods for the detection of artificially-generated text. We work with Queneau's original French version and the Italian translation by Umberto Eco. We start by comparing how various methods for the detection of automatically generated text, also using different large language models, evaluate the different styles in the work. We then link this automatic evaluation to distinct characteristics related to the content and structure of the various styles. This work is an initial attempt at exploring how methods for the detection of artificially-generated text can find application as tools to evaluate the qualities and characteristics of human writing, to support better writing in terms of originality, informativeness, and clarity.
The extraordinary writing ability of the latest chatbots and virtual assistants based on Large Language Models (LLMs) poses a significant question for anyone who attempts to write today, be they a scientist, a writer, or a lover: is it worth the effort to engage in the act of writing? For those not hindered by excessive laziness and who, with courage, still tackle writing with determination and passion, this question implies a more specific one: am I writing a text that an artificial intelligence could not have produced? We believe that the answer to this question may, in the future, come from the LLMs themselves, given that they are designed to assess the probability of the occurrence of the next word in a text. We envision a future where LLMs, although widely used to produce essentially obvious texts, will assist those who still engage in writing to create texts worth reading, if only because the artificial intelligence, having read and statistically evaluated almost everything ever written, considers them non-obvious and distinct from what it would have produced itself. The ability of LLMs to evaluate the probability of the next word in a text stems from the extensive corpus of writing they are trained on. Consequently, their evaluation of a piece of writing is ultimately based on an indirect comparison between the given text and the entire body of literature they have been exposed to. Using LLMs to assess how much a text differs from the production capabilities of LLMs inherently implies an evaluation of the novelty it represents compared to known literature. Starting to move in this direction, this article explores whether an LLM can be used to help humans answer this question. In this first attempt we do this not based on the content intended for communication but on the style. We have conducted a preliminary study on the possibility of using LLMs to evaluate how and to what extent a certain writing style and/or a specific text differs from what a machine can achieve. We took as a reference Raymond Queneau's ``Exercises in Style'' AUTHOR, which draws from Erasmus of Rotterdam's ``De Utraque Verborum ac Rerum Copia'' AUTHOR, a bestseller widely used for teaching how to rewrite pre-existing texts and how to incorporate them into a new composition. In Queneau's work, the same simple story is revisited each time in a different literary style. We asked ourselves and conducted experiments on how much the texts in the various styles used by Queneau differ from the writing abilities of LLMs, which have acquired their skills by learning statistical relationships from vast amounts of text. Calvino had already attempted to answer this question: ``What would be the style of a literary automaton?'' He replied, ``The test for a poetic-electronic machine will be the production of traditional works, of poems with closed metric forms, of novels with all the rules''. We believe it has indeed happened this way, as today's chatbots and virtual assistants are built from a language model. In this work, we provide initial evidence that language models recognize as more probable those texts that are more traditional, particularly those used in spoken language or by classical characters, while they deem experimental and innovative texts more unlikely. However, we find evidence that even for powerful LLMs it remains difficult to draw a clear line between experimental texts and those that instead run the risk of becoming unreadable.
This work is a first exploration of the idea of designing tools that evaluate how and to what extent a writing style and/or a specific text differs from what a machine can achieve. For this task, we tested the use of machine-generated text detection tools, under the hypothesis of a correlation between their detection scores and our goal of discovering the many facets that make up an original human-written text. We applied them to Queneau's exercises in style, in which the same story is written using a rich and varied set of writing styles. We found a consistent correlation between the assigned scores, across detection methods and across languages. The comparison of the styles with their detection scores indicates that lower scores from detection methods are correlated with the use of unusual terms or syntax, while higher scores are more related to styles based on a cleaner, plainer prose, with a smooth transition between these two extremes. The ranks thus do not indicate a ``better'' or a more ``interesting'' style, yet they confirm Calvino's statement reported in the introduction: the content most akin to machine-generated text is ``traditional'' content, following the main rules of writing. Writers willing to depart from sounding ``ordinary'' could indeed use detection methods to estimate these aspects of their content, with the caveat that while a mid-level detection score may suggest some original traits in a text, low scores may not indicate a more original or interesting text, but may likely derive from an obscure or plainly unreadable one. Given the positive results of this first investigation, future developments will be based on the use of texts specifically written for this activity. This will have the advantage of giving full control over the contents and guaranteeing that they have never been part of the LLMs' training data. This work was partially supported by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI", funded by European Union - NextGenerationEU.
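A minimal sketch of the kind of detection signal discussed above: a perplexity-style score from a causal language model, where predictable ("traditional") prose scores low and unusual prose scores high. The model choice (GPT-2) and the two English style snippets are illustrative stand-ins, not the detectors or the French/Italian texts used in the paper.

```python
# Illustrative stand-in for a detection score: mean negative log-likelihood
# under a causal LM. Predictable prose scores low; experimental prose high.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def mean_nll(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    return out.loss.item()  # higher = less predictable for the model

styles = {
    "plain": "A man got on the bus and argued with another passenger.",
    "experimental": "Bus-borne, seat-squabbled, hatward went the man.",
}
for name, text in styles.items():
    print(f"{name}: {mean_nll(text):.2f}")
```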
9
Textual Genres & Literature Linguistics
613_2024
2,024
Elena Sofia Ruzzetti, Federico Ranaldi, Dario Onorati, Davide Venditti, Leonardo Ranaldi, Tommaso Caselli, Fabio Massimo Zanzotto
Assessing the Asymmetric Behaviour of Italian Large Language Models across Different Syntactic Structures
ENG
7
1
1
Università di Roma Tor Vergata, Sapienza Università di Roma, University of Edinburgh, University of Groningen
4
1
0
2
Leonardo Ranaldi, Tommaso Caselli
0
0
Italy, United Kingdom, Netherlands
Rome, Edinburgh, Groningen
While LLMs get more proficient at solving tasks and generating sentences, we aim to investigate the role that different syntactic structures play in models' performance on a battery of Natural Language Understanding tasks. We analyze the performance of five LLMs on semantically equivalent sentences that are characterized by different syntactic structures. To correctly solve the tasks, a model is implicitly required to correctly parse the sentence. We find that LLMs struggle with more complex syntactic structures, with an average drop of 16.13 (\pm 11.14) points in accuracy on a Q\&A task. Additionally, we propose a method based on token attribution to spot which areas of the LLMs encode syntactic knowledge, by identifying the model heads and layers responsible for the generation of a correct answer.
As Large Language Models (LLMs) become more proficient at generating plausible-sounding sentences, it is compelling to determine the extent to which their competence in understanding more or less complex sentences is similar to that of a human being. A peculiarity of natural languages is that the same meaning can be encoded by multiple syntactic constructions. In Italian, for instance, the unmarked sentence follows a subject-verb-object (SVO) word order. However, inversions of this ordering do not necessarily lead to ungrammatical sentences. A case in point is represented by cleft sentences, i.e., sentences where the unmarked SVO sequence is violated. This corresponds to specific communicative functions, namely emphasizing a constituent, and is obtained by putting one element in a separate clause. In particular, Object Relative Clauses -- where the element that is emphasized is the object of the sentence -- are difficult to understand AUTHOR. For example, the sentence ``Sono i professori che i presidi hanno elogiato alla riunione d'istituto'' is more challenging for an Italian speaker than its semantically equivalent unmarked version ``I presidi hanno elogiato i professori alla riunione d'istituto'', where the SVO order is restored. Similarly, in Nominal Copular constructions, the inversion of subject and verb is documented to cause difficulties in understanding the meaning of the sentence AUTHOR. Hence, syntax plays a crucial role not only in the general construction of language but also in native speakers' ability to comprehend sentences: a correct syntactic parsing of a sentence is necessary to understand its meaning, and some syntactic structures are preferred over others. To what extent this preference is replicated by LLMs needs to be further explored; this requires an analysis of the models that considers how syntactic structures are encoded within them. Related work has primarily been based on probing methods AUTHOR, whose results are rather superficial and do not allow us to understand specifically which parts of the input sentence the model focuses on during processing. If the model shows some knowledge about syntax, there should be an area of the model responsible for it; we aim to detect the area of a model responsible for its syntactic knowledge. Extensive work has been devoted to understanding how Transformer-based architectures encode information, and one main objective is to localize which area of the model is responsible for a certain behavior AUTHOR. Despite its usage as an explanation mechanism being debated AUTHOR, the attention mechanism is an interesting starting point given its wide use in the Transformer architecture.
While the attention weights alone cannot be used as an explanation of a model's behavior AUTHOR, an analysis that includes multiple components of the attention module is shown to be beneficial to obtain an interpretation of how a model processes an input sentence AUTHOR. Probing is a common method used to detect the presence of linguistic properties of language in models AUTHOR. Probing consists of training an auxiliary classifier on top of a model’s internal representation, which could be the output of a specific layer, to determine which linguistic properties the model has learned and encoded. In particular, it has been proposed to probe Transformer-based models to reconstruct syntactic representations like dependency parse trees from their hidden states AUTHOR. Probing tasks concluded that syntactic features are encoded in the middle layers AUTHOR. Correlation analysis on the weight matrices of monolingual BERT models confirmed the localization of syntactic information in the middle layers, showing that models trained on syntactically similar languages were similar in the middle layers AUTHOR. While an altered word order seems to play a crucial role in Transformer-based models' ability to process language AUTHOR, the correlation between LLMs' downstream performance and the encoding of syntax needs to be further explored. In this paper, we initially examine how syntax influences LLMs' capability of understanding language. To achieve this, we analyze five open-weights LLMs -- trained on the Italian language either from scratch or during a fine-tuning phase -- and measure their performance on question-answering (Q\&A) tasks that require an implicit parsing of the roles of words in the sentence to provide the correct answer. We use an available set of Q\&A tasks designed for Italian speakers AUTHOR and propose similar template-based questions for two other datasets of Italian sentences characterized by different syntactic structures (Section sec:task). The results show that the models are affected by the different syntactic structures in solving the proposed tasks (Section sec:qa_res): LLMs struggle when more complex syntactic structures are present, with an average drop in accuracy of 16.13 (\pm 11.14) points. We then propose a method -- based on norm-based attribution AUTHOR -- to localize where syntactic knowledge is encoded, by identifying the models' attention heads and layers that are responsible for the generation of a correct answer (Section sec:localizingattr). Although some differences can be observed across the five LLMs, we notice that syntactic information is more widely included in the middle and top layers of the models.
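To illustrate the general flavour of attention-based localization (a simplification, not the paper's norm-based attribution method), the sketch below measures how much attention each layer's heads direct to a target word in an Italian cleft sentence, using multilingual BERT as a stand-in model.

```python
# Simplified illustration of attention-based localization: per-layer
# attention directed to a target word in an Italian cleft sentence.
# mBERT is a stand-in; the paper uses norm-based attribution on LLMs.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                  output_attentions=True)

sentence = "Sono i professori che i presidi hanno elogiato."
enc = tok(sentence, return_tensors="pt")
tokens = tok.convert_ids_to_tokens(enc.input_ids[0])
# locate the first (sub)token of "professori"; assumes the tokenized form
# still starts with "professor"
pos = next(i for i, t in enumerate(tokens)
           if t.lstrip("#").startswith("professor"))

with torch.no_grad():
    attentions = model(**enc).attentions  # one (1, heads, seq, seq) per layer

for layer, att in enumerate(attentions):
    per_head = att[0, :, :, pos].mean(dim=-1)  # mean attention to the target
    print(f"layer {layer:2d}: strongest head -> {per_head.max():.3f}")
```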
In this paper, we have investigated how semantically equivalent Italian sentences are processed by LLMs when their syntax differs. We tested LLMs trained on Italian -- or with Italian data in the pre-training material -- and measured their capabilities on a battery of Q\&A tasks that implicitly rely on parsing the correct role of words in a sentence to be solved. Our findings confirm that cleft sentences and constructions with an inversion of subject and verb are difficult to understand for LLMs as well, similarly to what is observed for humans. Furthermore, using token-to-token attribution, we have systematically identified that syntactic information tends to be encoded in the middle and top layers of LLMs.
1
Language Models
614_2024
2,024
Giada Palmieri, Konstantinos Kogkalidis
Nominal Class Assignment in Swahili
ENG
2
1
1
Università di Bologna, Aalto University
2
1
0
1
Konstantinos Kogkalidis
0
0
Italy, Finland
Bologna, Espoo
We discuss the open question of the relation between semantics and nominal class assignment in Swahili. We approach the problem from a computational perspective, aiming first to quantify the extent of this relation, and then to explicate its nature, taking extra care to suppress morphosyntactic confounds. Our results are the first of their kind, providing a quantitative evaluation of the semantic cohesion of each nominal class, as well as a nuanced taxonomic description of its semantic content.
Swahili has a grand total of 18 nominal classes (i.e., `genders'). There is no consensus on the extent to which the assignment of a noun to a given class is determined by its semantic content. We explore this question from a computational angle. Our experiments suggest semantic cohesion among nominal classes, and provide a summary of the taxonomic concepts associated with each class.
We explored the relation between semantics and nominal class assignment in Swahili. We approached the question from two complementary computational angles. Verifying first the presence of a relation using supervised learning, we then sought to explicate its nature using unsupervised topic modeling. Starting from a blank slate and without any prior interpretative bias, our methodology rediscovered go-to theories of Swahili nominal classification, while also offering room for further insights and explorations. Our work is among the first to tackle Bantu nominal assignment computationally, and the first to focus exclusively on semantics. Our methodology is typologically unbiased and computationally accessible, allowing for an easy extension to other languages, under the sole requirement of a dictionary. We make our scripts and generated artifacts publicly available at . We leave several directions open to future work. We have experimented with a single dataset, a single model and a single lexical database; varying either of these coordinates and aggregating the results should help debias our findings. We have only looked for semantic generalizations across hyperonymic taxonomies -- looking at other kinds of lexical relations might yield different semantic observations. Our chosen metric of relevance is by construction limited to first-order pairwise interactions, failing to account for exceptional cases or conditional associations. Finally, we had to resort to computational acrobatics through English in order to access necessary tools and resources. This is yet another reminder of the disparities in the pace of `progress' of language technology, and a call for the computational inclusion of typologically diverse languages.
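A minimal sketch of the supervised verification step, under stated assumptions: if a classifier can predict the nominal class from semantics-bearing noun embeddings better than chance, class assignment is not semantically arbitrary. Random vectors and labels stand in for real Swahili data here, so the printed accuracy will hover around chance.

```python
# Sketch of the supervised step: random vectors stand in for real Swahili
# noun embeddings and class labels. On real data, accuracy clearly above
# 1/18 would signal semantic cohesion of the nominal classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 50))     # placeholder noun embeddings
y = rng.integers(0, 18, size=360)  # placeholder labels for the 18 classes

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"accuracy: {scores.mean():.3f} (chance ~ {1/18:.3f})")
```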
22
Distributional Semantics
615_2024
2,024
Claudia Corbetta, Giovanni Moretti, Marco Carlo Passarotti
Join Together? Combining Data to Parse Italian Texts %and Addressing the Challenge of Non-Projectivity
ENG
3
1
1
Università di Bergamo, Università di Pavia, Università Cattolica del Sacro Cuore
3
0
0
0
0
0
0
Italy
Bergamo, Pavia, Milan
In this paper, we create and evaluate non-combined and combined models using Old and Contemporary Italian data to determine whether increasing the size of the training data with a combined model could improve parsing accuracy to facilitate manual annotation. We find that, despite the increased size of the training data, in-domain parsing performs better. Additionally, we discover that models trained on Old Italian data perform better on Contemporary Italian data than the reverse. We attempt to explain this result in terms of syntactic complexity, finding that Old Italian text exhibits higher sentence length and non-projectivity rate.
High-quality textual data (semi-)manually enhanced with different layers of metalinguistic annotation are extremely valuable resources for conducting linguistic analysis. As for the syntactic layer of annotation, the de facto standard for dependency-based annotation is Universal Dependencies (UD), an initiative that provides machine-readable annotations for a wide variety of languages, including historical languages AUTHOR. At the current state of the art, Contemporary Italian is well-represented in UD, whereas Old Italian is only represented by one annotated text (a portion of the Divine Comedy of Dante Alighieri). The creation of additional Old Italian annotated data is therefore advisable. Since a fully manual annotation process is time-consuming and requires significant effort, we aim to expedite it by using a parser that pre-parses the data, leaving the human annotator with only a manual revision task. To address this, given the scarcity of Old Italian data, we create a combined parser using both Contemporary and Old Italian data. The objective is to determine whether a combined model with an expanded training dataset performs better compared to non-combined models (see AUTHOR for the Spanish language and AUTHOR for Stanza combined models). Our aim is twofold: a) to evaluate whether creating a combined model with more linguistic data, even though from different periods, can achieve better scores compared to using a model based on in-domain data. This point could shed light on the possibility of using combined models with a large amount of data for parsing other Italian texts, specifically Old Italian texts for which gold-annotated data are scarce. b) to understand, through the analysis of recurrent mistakes, possible differences in syntactic patterns. We aim to determine whether these differences are due to: i) a divergence in annotation style, or ii) variations in the diachrony of the language or genre. The paper is organised as follows: Section Sec1 provides a brief description of the Italian language, the syntactic resources and the Italian data available; Section Sec2 details the data used for the experiments, presents the performances of non-combined and combined models, and evaluates their performances; Section Sec3 analyzes the syntactic complexity of each test set (Old and Contemporary Italian) to address accuracy differences; and finally, Section Sec5 provides the conclusion.
In this paper, we create and evaluate non-combined and combined models of Old Italian and Contemporary Italian data. In light of the scarcity of manually annotated Old Italian data compared to the richness of Contemporary Italian data, the aim of this work is to determine whether combining data to train a combined model could lead to better accuracy in parsing, thereby facilitating the process for human annotators. We observe that combining Contemporary Italian and Old Italian data, even though it increases the data size of the model, does not lead to better LAS and UAS accuracy scores. This confirms, in line with other studies AUTHOR, that having an in-domain training set is preferable. Additionally, we notice that the model trained on OI data performs better on Contemporary Italian texts than the reverse (i.e., models trained on Contemporary data on OI texts). To explain these results, we investigate the syntactic complexity of each test set (OI, CI-ISDT, and CI-VIT). Specifically, we evaluate sentence length, tree depth, lexical density and the type-token ratio. We notice that the test sets differ only in sentence length. We then proceed to calculate another parameter of syntactic complexity, namely non-projectivity. We discover that OI texts present a higher number of non-projective sentences. We hypothesize that the high level of non-projectivity could be connected to the genre of the OI text, namely poetry. Thus far, the lack of UD treebanks for OI prose texts and for CI poetry texts has prevented us from investigating whether the high degree of non-projectivity observed in the OI test set (based on the Italian-Old treebank) is characteristic of the poetry genre or specific to OI. Such a question is left for further studies. Finally, we are currently working to increase the amount of manually annotated OI data, expanding both the range of authors and the genres of the texts considered. This will allow us to evaluate the model's performance both within and outside its domain (in terms of authorship and text typology), as well as to assess its potential applicability to other OI texts.
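As a sketch of how the non-projectivity rate mentioned above can be computed, the snippet below counts sentences containing at least one non-projective arc in a CoNLL-U treebank; the file path is hypothetical and the `conllu` library is assumed.

```python
# Sketch: count sentences with at least one non-projective arc in a CoNLL-U
# treebank ("old_italian.conllu" is a hypothetical path). An arc head->dep
# is non-projective if some token between the two is not dominated by head.
from conllu import parse_incr  # pip install conllu

def is_nonprojective(sentence) -> bool:
    heads = {t["id"]: t["head"] for t in sentence if isinstance(t["id"], int)}

    def dominated_by(node, ancestor):
        while node not in (None, 0):
            if node == ancestor:
                return True
            node = heads.get(node)
        return False

    for dep, head in heads.items():
        if head in (None, 0):
            continue  # skip the root arc
        lo, hi = sorted((dep, head))
        if any(not dominated_by(k, head) for k in range(lo + 1, hi)):
            return True
    return False

total = nonproj = 0
with open("old_italian.conllu", encoding="utf-8") as f:
    for sent in parse_incr(f):
        total += 1
        nonproj += is_nonprojective(sent)
print(f"non-projective sentences: {nonproj}/{total}")
```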
4
Syntax and Dependency Treebanks
616_2024
2,024
Federica Manzi, Leon Weber-Genzel, Barbara Plank
Fine-grained Sexism Detection in Italian Newspapers
ENG
3
2
1
Ludwig-Maximilians-University Munich, IT University of Copenhagen
2
1
1
3
Federica Manzi, Leon Weber-Genzel, Barbara Plank
0
0
Germany, Denmark
Munich, Copenhagen
In recent years, tasks revolving around hate speech detection have experienced a growing interest in the field of Natural Language Processing. Two main trends stand out in the context of sexism recognition: the focus on overt forms of sexism such as misogyny on social media and tackling the problem as a text classification task. The main objective of this work is to introduce a new approach to tackle sexism recognition as a sequence labelling task, operating on the token level rather than the document level. To achieve this goal, we introduce (i) the FGSDI (Fine-Grained Sexism Detection in Italian) corpus, containing Italian newspaper articles annotated with fine-grained linguistic markers of sexism, and (ii) a two-step pipeline that sequentially performs sexism detection on the sentence level and sexism classification on the token one. Our primary findings include that (i) tackling the task of sexism recognition as a sequence labelling task is possible, however, a large amount of labelled data is needed; (ii) leveraging few-shot learning for sexism detection proves to be an effective solution in scenarios where only a limited amount of data is available; (iii) the proposed pipeline approach allows for better results compared to the baseline by doubling the overall precision and achieving a better F1-score.
According to the Sapir-Whorf hypothesis AUTHOR AUTHOR, language shapes the way we think and interact with the world. It becomes therefore crucial to analyse our usage of linguistic expressions to reveal the intricate dynamics of societal norms, power structures, and cultural values embedded within our belief system. In this context, language can also become a vehicle for different forms of bias and discrimination, including sexism. Sexism in language encompasses a variety of phenomena, ranging from more subtle ones, nested within the grammar and semantic choices we make when talking about women, to more overt instances of misogyny, characterized by aggressiveness and violence against individuals based on their gender identity. \newline In recent years, sexism and misogyny detection and classification have witnessed a growing interest in Natural Language Processing (NLP), especially after the advent of transformer models AUTHOR, which unravelled new possibilities in nearly every NLP task. However, these efforts have mainly focused on misogyny and hate speech in general, tackled as text classification tasks on the document level, and specifically within the context of social media platforms such as Twitter and Facebook. \newline The main contributions of this paper are as follows. First, we concentrate on specific linguistic markers of sexism, introducing more fine-grained classes than those usually considered in sexism detection and classification tasks. Inspired by linguistic work by Alma Sabatini AUTHOR, we propose a new annotation scheme and corpus for fine-grained sexism detection, resulting in the FGSDI (Fine-Grained Sexism Detection in Italian) corpus of Italian newspaper articles, with the annotation guidelines released in appendix sec:appendix_b. Second, we address the recognition of linguistic markers of sexism as a token-level classification task, assigning a label to each token according to the fine-grained classes introduced before. This constitutes an innovation in that, to the best of our knowledge, no other work---in Italian or other languages---has tackled this task at such a granularity. \newline In particular, we compare two different approaches. The first one, which we use as baseline, consists of fine-tuning a RoBERTa AUTHOR model on the token classification task using whole texts as input. The second, novel one is a two-step pipeline approach inspired by AUTHOR which performs sexism detection and classification subsequently. The sexism detection task is tackled as binary classification applied at the sentence level. Sentences classified as potentially containing linguistic markers of sexism then undergo the second step of the pipeline, which again involves classification on the token level.
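A minimal sketch of the two-step control flow just described; the two classifiers are stubbed here (in the paper they are a sentence-level binary detector and a RoBERTa-based token classifier), and the two example sentences are invented.

```python
# Sketch of the two-step pipeline's control flow with stubbed classifiers.
def sentence_is_candidate(sentence: str) -> bool:
    # placeholder for the binary sentence-level sexism detector
    return "direttrice" in sentence.lower()

def label_tokens(sentence: str) -> list:
    # placeholder for the fine-grained token classifier;
    # "O" marks tokens outside any sexism-marker span
    return [(token, "O") for token in sentence.split()]

article = ["La direttrice ha aperto la riunione.",
           "La conferenza inizia alle nove."]
for sent in article:
    if sentence_is_candidate(sent):   # step 1: cheap sentence filter
        print(label_tokens(sent))     # step 2: token-level labelling
```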
This work aimed to bridge a gap in the research area of sexism detection and classification in Italian through the following contributions. First, we proposed the FGSDI (Fine-grained Sexism Detection in Italian) corpus, for which we, importantly, provided new in-depth annotation guidelines. They are based on foundational linguistic work by AUTHOR and can be applied to other text genres in the future. Second, differently from previous research, we modelled the task of sexism classification as a sequence labelling task instead of a text classification task. To achieve this goal, we compared two approaches, the baseline and the two-step pipeline, with the latter allowing for a better overall performance on the task. \newline Enriching the corpus with new articles annotated with relevant labels would be the biggest contribution to bringing this project forward. At the same time, having multiple annotators could enhance insights into the annotations and lower the risk of bias and subjectivity related to having a single annotator. Moreover, the modularity of the pipeline makes it open to further experimentation, especially in scenarios where more relevant data are available. One example could be using the multi-class classification setting of SetFit, which was excluded from the final pipeline since it performed slightly worse than the binary setting we ultimately used. Finally, further improvements can be made to the use of coreference resolution, which in many cases is not accurate in recognizing occurrences of the same referent in text. We would like to thank the reviewers for the feedback and encouraging words. This work is supported by the MaiNLP research lab at LMU Munich.
6
Sentiment, Emotion, Irony, Hate
617_2024
2,024
Lucia Busso, Claudia Roberta Combei
Written Goodbyes: How Genre and Sociolinguistic Factors Influence the Content and Style of Suicide Notes
ENG
2
2
1
Aston University, Università di Pavia
2
1
0
1
Lucia Busso
0
0
Italy, United Kingdom
Birmingham, Pavia
The study analyses a novel corpus of 76 freely available English authentic suicide notes (SNs) (letters and social media posts), spanning from 1902 to 2023. By using NLP and corpus linguistics tools, this research aims at decoding patterns of content and style in SNs. In particular, we explore variation in linguistic features in SNs across sociolinguistic factors (age, gender, addressee, time period) and between text types -- referred to as genre -- (letters vs. online posts). To this end, we use topic models, subjectivity analysis, and sentiment and emotion analysis. Results highlight that both discourse and emotion expression show differences depending on genre, gender, age group and time period. We suggest a more nuanced approach to personalized prevention and intervention strategies based on insights from computer-assisted linguistic analysis.
This paper investigates the language of suicide notes, with the goal of uncovering patterns of discourse, topics, and emotional expression across various sociolinguistic factors and relationship dynamics, spanning over 100 years. A suicide note (SN) has been defined in the literature as "any available text by a suicide which was authored shortly before death" ( AUTHOR: 26). \par The importance of a detailed analysis of suicide notes has been acknowledged in the scholarly debate ( AUTHOR). In fact, SNs have been widely studied in linguistics, sociology, and psychology, starting with the publication in 1959 of Osgood and Walker's seminal work ( AUTHOR). Since then, the language of SNs has been investigated mainly through Genre Analysis ( AUTHOR), with some scholars working with corpus methods ( AUTHOR). Lately, big corpora of SNs have been collected through the Web and used for computational analyses (inter alia AUTHOR). \par Research on SNs is naturally practical, being focused on suicide prevention ( AUTHOR), identification ( AUTHOR), and authenticity ( AUTHOR). For instance, the study by AUTHOR uses classification algorithms to help mental health professionals distinguish between genuine and elicited suicide notes. This -- the authors claim -- can in turn help develop a prediction strategy for repeated suicide attempts, as suicide notes offer valuable insights into specific personality states and mindsets. Similarly, AUTHOR suggests that analysing SNs may contribute to assessing the risk of repeated suicide attempts. \par Despite the area being well-researched, especially in forensic linguistics, current analyses of SNs present several shortcomings. Given the difficulty of accessing data, scholars have either used dubious source material (such as the letters published on the blog \href{https://theholydark.wordpress.com/2012/08/28/some-painfully-edifying-reading-suicide-notes/}{"The Holy Dark"}) or have reused and reanalysed SNs written by famous people (such as Virginia Woolf and Kurt Cobain, e.g., AUTHOR). Moreover, there is no study to date -- to the best of the authors' knowledge -- that analyses SNs using text type, which we refer to as genre, or sociolinguistic factors (such as gender, age, addressee, or time period) as covariates. \par In the present paper, we set out to perform corpus and computational analyses on a novel dataset of authentic suicide notes. Specifically, we aim to explore whether and to what extent SN style and content vary according to genre (letter vs. online post) and sociolinguistic factors (the victim's gender and age, as well as the addressee and time period of the SN). To this end, we employ Structural Topic Modelling ( AUTHOR) and keyword analysis, subjectivity analysis ( AUTHOR), and sentiment and emotion analysis ( AUTHOR).
This mixed-methods study analysed the content and style of 76 SNs written over the course of a century, using genre, several sociolinguistic factors, and relationship dynamics as covariates. First of all, three main topics emerged from our corpus, which we labelled as Explanations, Anguish, and Connectedness. Looking at the differences in topical prevalence between the two text types, we observed that online posts displayed fewer private feelings (e.g., anguish and pain) and more polarized emotion words and swearwords. \par Subjectivity analysis revealed that SNs tended to be more subjective than objective, irrespective of the genre. Some differences based on addressees were identified in the corpus; for example, SNs directed toward close relationships (i.e., life-partners and family) showed higher subjectivity scores, suggesting a more profound and personal style, compared to those directed toward the broader (internet) public. \par As far as sentiment analysis is concerned, negative sentiment was dominant in the corpus (i.e., three times more frequent than neutral or positive sentiment), especially in online posts. Then, the analysis of emotions revealed that sadness was the main emotion in the corpus. This evident presence of sadness and negative sentiment reflects the complex emotional challenges and inner struggles that victims experienced at the time they wrote their SNs. Although sadness was the most common emotion in both letters and online posts, it occurred more frequently in the latter text type. Also, letters tended to convey positive emotions (e.g., joy) more frequently than online posts. Finally, the analysis revealed that sadness was more common in the SNs written by female victims and by teenagers. \par All in all, our results reveal that the content, discourse, and emotional expression in SNs vary as a function of genre, sociolinguistic factors, and relationship dynamics. These differences uncover the need to take into account specific social, demographic, and cultural variables when designing and implementing suicide prevention and intervention strategies. In this sense, we believe that corpus-based and NLP research on SNs can contribute to the improvement of these personalized strategies. The research presented in this paper was conducted while C. R. Combei benefited from support provided by the project "PON Ricerca e Innovazione 2014–2020 - Linea Innovazione (D.M. 1062/2021)".
6
Sentiment, Emotion, Irony, Hate
618_2024
2,024
Irene Siragusa, Roberto Pirrone
Unipa-GPT: a framework to assess open-source alternatives to Chat-GPT for Italian chat-bots
ENG
2
1
1
Università di Palermo, IT University of Copenhagen
2
1
0
1
Irene Siragusa
0
0
Italy, Denmark
Palermo, Copenhagen
This paper illustrates the implementation of Open Unipa-GPT, an open-source version of the Unipa-GPT chat-bot that leverages open-source Large Language Models for embeddings and text generation. The system relies on a Retrieval Augmented Generation approach, thus mitigating hallucination errors in the generation phase. A detailed comparison between different models is reported to illustrate their performance as regards embedding generation, retrieval, and text generation. In the last case, models were tested both in a simple inference setup and after a fine-tuning procedure. Experiments demonstrate that open-source LLMs can be efficiently used for embedding generation, but none of the models reaches the performance obtained by closed models, such as \verb|gpt-3.5-turbo|, in generating answers. Corpora and code are available on GitHub
The increasing development of bigger and bigger Large Language Models (LLMs), reaching 70B parameters as for Meta's LLMs (Llama 2 AUTHOR and \verb|Llama 3| AUTHOR) and more as for OpenAI's (GPT-3 AUTHOR and GPT-4 AUTHOR), requires significant computational resources for training, fine-tuning or inference. OpenAI models are accessible only upon payment via the OpenAI API and cannot be downloaded in any way, while the open-source models by Meta are available also in the 8B and 13B parameter versions, and they can either be fine-tuned via Parameter-Efficient Fine-Tuning (PEFT) techniques AUTHOR such as LoRA AUTHOR, or they can make direct inference using an 8-bit quantization AUTHOR, keeping the computational resources relatively small. The availability of open-source small-size LLMs is crucial for developing Natural Language Processing (NLP) applications that leverage a fine-tuning procedure over a specific domain or language, as for \verb|Anita| AUTHOR, an Italian 8B adaptation of \verb|Llama 3|. Nevertheless, GPT and Llama models cannot be considered truly open-source since their training data sets are not available and, as for GPT models, neither is their actual architecture. The \verb|Minerva| AUTHOR model, on the other hand, is an Italian and English LLM whose architecture, weights, and training data are accessible, but it can be considered an exception in the LLM landscape. Starting from these premises, in this paper we propose Open Unipa-GPT, an open-source-based version of Unipa-GPT AUTHOR, a virtual assistant that uses a Retrieval Augmented Generation (RAG) approach AUTHOR to answer university-related questions issued by secondary school students. Open Unipa-GPT has been developed upon the same architecture as Unipa-GPT, and uses open-source LLMs for embedding generation, retrieval, and text generation. Our models are small compared to the ones used in our original version, namely \verb|text-embedding-ada-002| and \verb|gpt-3.5-turbo| from OpenAI. The paper is arranged as follows: related works are reported in Section sota, the architecture of Open Unipa-GPT is described in Section architecture, and an overview of the data set is provided in Section data set. Experiments and related results are reported in Section experiments. Finally, concluding remarks are drawn in Section conclusion.
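As a sketch of the retrieval step in such a RAG setup, the snippet below embeds a query and a handful of invented university passages with an open-source multilingual embedding model and returns the best-matching passage; the model name and the E5-style "query:"/"passage:" prefixes are assumptions, not necessarily the system's actual configuration.

```python
# Sketch of RAG retrieval: embed invented passages and a query, return the
# best match (to be prepended to the generation prompt).
from sentence_transformers import SentenceTransformer, util

docs = [
    "Le iscrizioni al corso di laurea in Informatica aprono il 1 agosto.",
    "La mensa universitaria è aperta dalle 12 alle 15.",
]
model = SentenceTransformer("intfloat/multilingual-e5-small")
doc_emb = model.encode([f"passage: {d}" for d in docs], convert_to_tensor=True)

query = "Quando posso iscrivermi a Informatica?"
q_emb = model.encode(f"query: {query}", convert_to_tensor=True)

scores = util.cos_sim(q_emb, doc_emb)[0]
print(docs[int(scores.argmax())])  # passage handed to the generator LLM
```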
In this paper we presented Open Unipa-GPT, a virtual assistant, based solely on open-source LLMs, that uses a RAG approach to answer Italian university-related questions from secondary school students. The main intent of the presented research was setting up a framework to test open-source small-size LLMs, with either moderate or no fine-tuning at all, to be used for generating the embeddings and/or as the text generation front-end in a RAG setup. Our study led us to identify \verb|E5-mistral-7b-instruct| as a valuable open-source alternative to OpenAI's embeddings, while none of the considered models attains a generation performance comparable to \verb|gpt-3.5-turbo|, even after a fine-tuning procedure. The most promising generation LLM, when plugged into our architecture, appears to be \verb|Anita-8B|, but it still shows some issues related to both the tokenization and the grammatical correctness of the output. We are currently working on a deeper exploration of different fine-tuning approaches, along with the use of large open-source LLMs for text generation.
1
Language Models
619_2024
2,024
Leonardo Ranaldi, Federico Ranaldi, Giulia Pucci, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto
The limits of Italian in Reasoning Tasks
ENG
5
2
0
University of Edinburgh, Università di Roma Tor Vergata, University of Aberdeen
3
1
0
2
Leonardo Ranaldi, Giulia Pucci
0
0
Italy, United Kingdom
Edinburgh, Rome, Aberdeen
Earlier work has shown the efficacy of reasoning methods in eliciting step-wise reasoning of large language models (LLMs) by operating via in-context demonstrations. These strategies, exemplified by Chain-of-Thought (CoT) and Program-Aided Language Models (PAL), have been shown to reason well in monolingual contexts, primarily in English. However, there has been limited investigation into their capabilities in other languages, especially Italian. To gain a deeper understanding of the role of reasoning methods, we propose a multidimensional analysis tailored to Italian, focusing on arithmetic and symbolic reasoning tasks. Our findings indicate that the effectiveness of reasoning methods varies significantly beyond English. Specifically, CoT, which relies on natural language demonstrations, is limited to English. Conversely, the structured nature of PAL in-context demonstrations facilitates multilingual comprehension, enabling LLMs to generate programmatic answers in Italian as well. Finally, for a more complete overview, we observe that additional alignment methods do not improve downstream performance; in contrast, in some cases, they restrict the abilities of the original models.
Large language models (LLMs) are able to tackle tasks using prompts formed by structured patterns, a process known as in-context learning AUTHOR. This method allows the models to solve tasks without modifying their underlying parameters, relying solely on the provided inputs. The success of in-context learning has consequently heightened interest in analysing the factors that influence its effectiveness AUTHOR. Regarding reasoning methods, two effective strategies have emerged: Chain-of-Thought (CoT) AUTHOR and Program-Aided Language Models (PAL) AUTHOR. CoT decomposes a reasoning task into a series of intermediate steps using natural language, making it more general and human-understandable. In contrast, PAL employs Python functions to provide reasoning solutions, with its step-by-step programming approach leading to more systematic and structured reasoning. [Figure: overview of the cross-lingual setting proposed in our analysis; we explore the impact of in-context demonstrations beyond English and the performance achieved by different LLMs (\S sec:models); two example prompts are translated from Italian to English.] Although earlier research primarily showcased the functioning of reasoning methods in English, recent studies have expanded to explore multilingual approaches. \citet{shi2022language} showed that the effectiveness of CoT rationales is limited to the languages most represented in LLMs' pre-training data. \citet{huang2023languages} addressed the problem by proposing prompting mechanisms that translate the problem into English, while \citet{ranaldi-etal-2024-empowering} elicit multi- and cross-lingual alignments to enable reasoning, and \citet{ranaldi-etal-2024-tree} propose self-correction mechanisms. However, this line of work is limited to proposing performance solutions for a few languages, leaving aside the study of the role and impact of languages such as Italian. In this paper, we conduct an in-depth study to evaluate the role of reasoning methods in Italian. Taking previous work a step further, we study the operation of reasoning methods by analysing the effects of different types of reasoning methods on LLMs' Italian reasoning capabilities. This leads to the main research questions of this paper: (i) What role do natural language and structured in-context demonstrations play in reasoning planning in Italian? (ii) What are the impacts and limits of natural language demonstrations? (iii) Do Italian-aligned and Italian-centred models respond differently to reasoning methods? To answer these questions, we operate via CoT and PAL (shown in Table tab:example_Native_CoT and Table tab:example_Native_PAL). For multilingual CoT, we use natural language demonstrations both in English and in Italian following \citet{shi2022language}. Instead, for PAL, we propose a novel method by extending the original English one AUTHOR. We use reasoning tasks covering mathematical, commonsense reasoning, and natural language inference tasks in their original versions (English) and adapted to Italian \href{https://github.com/lranaldii/italian_arithmetic_reasoning}{(resources available)}. These tasks are MGSM AUTHOR and MSVAMP AUTHOR, which consist of mathematical reasoning problems, and XCOPA AUTHOR, PAWS-X AUTHOR and XNLI AUTHOR, which consist of commonsense reasoning and natural language inference.
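To illustrate the difference between the two prompting styles discussed above, a minimal sketch follows; the Italian problem text and prompt wording are our own illustrative examples, not the paper's actual demonstrations.
\begin{verbatim}
# Hedged sketch: a CoT demonstration (natural language) versus
# a PAL demonstration (executable code) for the same Italian item.
cot_demo = (
    "D: Luca ha 5 mele e ne compra altre 3. Quante mele ha?\n"
    "R: Luca parte con 5 mele e ne aggiunge 3. 5 + 3 = 8. "
    "La risposta è 8."
)

pal_demo = (
    "D: Luca ha 5 mele e ne compra altre 3. Quante mele ha?\n"
    "def solution():\n"
    "    mele_iniziali = 5\n"
    "    mele_comprate = 3\n"
    "    return mele_iniziali + mele_comprate"
)

# With PAL, the model-generated program is executed to obtain
# the final answer:
scope = {}
exec(pal_demo.split("?\n")[1], scope)
print(scope["solution"]())  # -> 8
\end{verbatim}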
Finally, we select a range of different LLMs: we employ GPT models AUTHOR for the results obtained in multilingual tasks, Phi-3 AUTHOR and Mixtral AUTHOR for the results obtained in Italian benchmarks, different versions of Llama-2 and Llama-3 AUTHOR (with versions adapted for Italian, i.e., Llamantino-2 and -3 AUTHOR), and finally two Italian-centered LLMs for the improvements achieved by smaller-scale versions. We operate using the original models, and we propose aligned versions using state-of-the-art instruction-tuning methods based on synthetic data AUTHOR transferred to multilingual cases AUTHOR. The main contributions and findings of our paper are: \item Reasoning methods improve performance in Italian reasoning tasks as well as in English. However, although both methods bring tangible benefits, several limitations emerge in the natural language demonstrations employed in CoT. On the other side of the coin, we observe that structured reasoning demonstrations (i.e., PAL) elicit the models to plan the solution in a more modularised way. Consequently, this benefits the final performance in both English and non-English tasks. \item We display the positive impact of structured in-context demonstrations on solution planning in Italian. We then demonstrate that, since structured reasoning demonstrations are less ambiguous than natural language, they are more adaptable to math reasoning tasks and have a more noticeable impact in more articulate languages such as Italian. \item Finally, we show that the different LLMs analyzed in our contribution are able to understand problems in both English and Italian. However, performance in English is higher despite the different approaches used to equate Italian and English proficiency. This reveals that the limitation does not derive from proficiency in a specific language but rather from the language's intrinsic difficulty. To the best of our knowledge, this is the first work that investigates the impact of reasoning methods for Italian and demonstrates how these strategies can consistently boost LLMs' performance, equipping them with the ability to generate step-wise explanatory reasoning for their predictions. To facilitate reproduction, we share the data used at the following \href{https://github.com/lranaldii/italian_arithmetic_reasoning}{link}.
The benefits of reasoning methods emerge beyond English. Our analysis shows that properly elicited LLMs are capable of delivering reasoned answers in Italian as well. Indeed, by operating via CoT and PAL, we revealed that in-context demonstrations play a strategic role in improving performance, in direct proportion to their quality and quantity. Our research highlights the need for a customised strategy when employing reasoning methods for LLMs in different languages. It supports the demand for a reasonable combination of model scale, reasoning technique, and strategic use of in-context learning to elicit the prospects of multilingual LLMs.
1
Language Models
620_2024
2,024
Chiara Di Bonaventura, Lucia Siciliani, Pierpaolo Basile, Albert Meroño-Peñuela, Barbara McGillivray
Is Explanation All You Need? An Expert Survey on LLM-generated Explanations for Abusive Language Detection
ENG
5
3
1
King's College London, Imperial College London, Università di Bari Aldo Moro
3
1
0
3
Chiara Di Bonaventura, Albert Meroño-Peñuela, Barbara McGillivray
0
0
Italy, United Kingdom
London, Bari
Explainable abusive language detection has proven to help both users and content moderators, and recent research has focused on prompting LLMs to generate explanations for why a specific text is hateful. Yet, understanding the alignment of these generated explanations with human expectations and judgements is far from being solved. In this paper, we design a before-and-after study recruiting AI experts to evaluate the usefulness and trustworthiness of LLM-generated explanations for abusive language detection tasks, investigating multiple LLMs and learning strategies. Our experiments show that expectations in terms of usefulness and trustworthiness of LLM-generated explanations are not met, as their ratings decrease by 47.78% and 64.32%, respectively, after treatment. Further, our results suggest caution in using LLMs for explanation generation of abusive language detection due to (i) their cultural bias, and (ii) difficulty in reliably evaluating them with empirical metrics. In light of our results, we provide three recommendations to use LLMs responsibly for explainable abusive language detection.
Explainability is a crucial open challenge in Natural Language Processing (NLP) research on abusive language AUTHOR as increasing models' complexity AUTHOR, models' intrinsic bias AUTHOR, and international regulations AUTHOR call for a shift in perspective from performance-based models to more transparent models. Moreover, recent studies have shown the benefits of explanations for users AUTHOR and content moderators AUTHOR on social media platforms. The former can benefit from receiving an explanation for why a certain post has been flagged or removed whereas the latter are shown to annotate toxic posts faster and solve doubtful annotations thanks to explanations. Several efforts have moved towards explainable abusive language detection in the past years, like the development of datasets containing rationales (i.e., the tokens in the text that suggest why the text is hateful) AUTHOR or implied statements (i.e., description of the implied meaning of the text) AUTHOR, and shared tasks on explainable hate speech detection AUTHOR, inter alia. With Large Language Models (LLMs) like FLAN-T5 AUTHOR showing remarkable performance across tasks and human-like text generation AUTHOR, recent studies have explored LLMs for explainable hate speech detection, wherein classification predictions are described through natural language explanations AUTHOR. For instance, AUTHOR used chain-of-thought prompting AUTHOR of LLMs to generate explanations for implicit hate speech detection. However, most of these studies rely on empirical metrics like BLEU AUTHOR to evaluate the generated explanations automatically. Consequently, the human perception and implications of these explanations remain understudied, as well as the extent to which empirical metrics approximate human judgements. AUTHOR recruited crowdworkers to evaluate the level of hatefulness in tweets and the quality of explanations generated by GPT-3. Instead, we conduct an expert survey investigating four LLMs and five learning strategies across multi-class abusive language detection tasks to answer the following questions: RQ1: How well do LLM-generated explanations for abusive language detection match human expectations? RQ2: How well do empirical metrics align with human judgements? RQ3: What makes LLM-generated explanations good, according to experts?
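Since the comparison of empirical metrics with human judgements is central to RQ2, a minimal sketch of scoring one generated explanation against a reference follows; the example strings are illustrative, and the metric libraries are standard implementations rather than the paper's exact setup.
\begin{verbatim}
# Hedged sketch: scoring a generated explanation against a
# human-written reference with two common metrics.
from sacrebleu import sentence_bleu
from bert_score import score as bert_score

candidate = "The post is hateful because it attacks a nationality."
reference = "The text targets a group on the basis of nationality."

bleu = sentence_bleu(candidate, [reference]).score
P, R, F1 = bert_score([candidate], [reference], lang="en")
print(f"BLEU: {bleu:.1f}  BERTScore-F1: {F1.item():.3f}")
# Divergence between such scores is what motivates combining
# metrics (e.g., BERTScore as upper bound, BLEU as lower bound).
\end{verbatim}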
In this paper, we conducted a before-and-after study to understand human expectations and judgements of LLM-generated explanations for multi-class abusive language detection tasks. Contrary to previous research AUTHOR, we investigated multiple LLMs and learning techniques, and we surveyed AI experts who are familiar with abusive language research instead of crowdworkers. We found that human expectations in terms of usefulness and trustworthiness of LLM-generated explanations are not met: after seeing these explanations, the usefulness and trustworthiness ratings decrease by 47.78% and 64.32%, respectively. Secondly, our results show that empirical metrics commonly used to evaluate textual explanations are highly volatile with respect to each other, even when they measure the same type of similarity (i.e., semantic vs. syntactic), therefore pointing to the need for more reliable metrics for the empirical evaluation of textual explanations. In general, the BERTScore and METEOR metrics exhibit the strongest correlation with human judgements. Lastly, our study provides evidence of the desiderata for LLM-generated explanations, suggesting that explanations should be coherent with human reasoning rather than model reasoning. Participants value most those textual explanations that are relevant and exhaustive with respect to the text they refer to, while being logically and linguistically correct. Justifications for this preference lie in the fact that abusive language detection heavily relies on additional context and knowledge about slang and slurs, for which receiving an explanation is helpful to participants' understanding of the text. Future work should investigate whether this preference holds for other domains as well. In light of our findings, we conclude with three recommendations to use LLMs responsibly for explainable abusive language detection: (1) be aware of the cultural bias these models might exhibit when generating free-text explanations, which can further harm targeted groups; (2) if possible, instruction fine-tune LLMs for explanation generation of abusive language detection. Not only could this ensure the generation of structured explanations, as advised by previous research AUTHOR, but it also returns the highest evaluation scores, both empirically and expert-wise, when using knowledge-guided prompts; (3) opt for a combination of empirical metrics to evaluate textual explanations when no human evaluation is possible, since no particular empirical metric seems to generalise across different learning techniques, models and datasets, making the ground truth lie somewhere in between BERTScore (upper bound) and BLEU (lower bound). This work was supported by the UK Research and Innovation [grant number EP/S023356/1] in the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (www.safeandtrustedai.org); by the Trustworthy AI Research award by The Alan Turing Institute, supported by the British Embassy Rome and the UK Science \& Innovation Network; and by the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU.
6
Sentiment, Emotion, Irony, Hate
621_2024
2,024
Olga Uryupina
Multimodal Online Manipulation: Empirical Analysis of Fact-Checking Reports
ENG
1
1
1
Università di Trento
1
0
0
0
0
0
0
Italy
Trento
This paper presents an in-depth exploratory quantitative study of the interaction between multimedia and textual components in online manipulative content. We discuss relations between content layers (such as proof or support) as well as unscrupulous techniques compromising visual content. The study is based on fakes reported and analyzed by PolitiFact and comprises documents from Facebook, Twitter and Instagram. We identify several currently pervasive phenomena affecting the impact of manipulative content on the reader and the possible strategies for effective debunking actions, and we discuss possible research directions.
Manipulative online content (fake news, propaganda, among others) is growing at an alarming rate, hindering our access to truthful and unbiased information and thus threatening the principles of democratic society. The problem has been addressed by professional journalists, who -- with the help of crowd-workers -- fight a never-ending battle to prevent information contamination. To enable a large-scale response to the misinformation threat, the AI community has invested a considerable effort into building competitive models for identifying non-transparent content, such as false claims or altered videos (deep fakes). However, we still lack a thorough understanding of manipulative content and the multiple aspects affecting its perception and impact on the reader. This paper aims at an in-depth analysis of one such aspect, namely, the interaction between different (multimedia) layers of the manipulative message. More specifically, we study the semantics underlying the relation between multimedia and textual parts of fake news. Our study is based on around 800 fakes from January till September 2022, as identified and analysed by PolitiFact, an independent journalistic agency and one of the most experienced fact-checking organizations, which has provided detailed analytics for non-transparent online content since 2007. Multimedia content, such as videos, reels, photos, screenshots or images, is becoming increasingly popular in social media: it is an appealing and powerful way of expressing and/or enhancing one's message. Nevertheless, as a scientific community, we still have little understanding of the way authors integrate multimedia into their content: most research so far has focused on a specific component and not on their interplay. Our study aims at identifying the role of the multimedia part of manipulative messages. [Figure example:fig, four sample posts: (a) video, "Biden to teachers: `They're not somebody else's children. They're yours when you're in the classroom.'" (cropped, used as proof); (b) screenshot, "Now you know why there's suddenly `a formula shortage'. The new age robber barrons have conveniently invested in some unholy breast milk made from human organs." (miscaptioned, used as support); (c) infographic, "In honor of \#TaxDay, I remind you that Governor Evers wanted to increase your taxes by \$1 billion just for heating your homes. Instead, Republicans cut your taxes by more than \$2 billion."; (d) photo, "Italian football agent Mino Raiola has died after suffering from an illness. RIP".] Figure example:fig shows some examples of potential fakes analyzed by PolitiFact. We observe different relations between the text and the image. In particular, in (example:figa), the video is supposed to prove the claim by providing direct evidence, whereas in (example:figb), the image provides support (appeal to authority). In (example:figc), the image is a visual paraphrase of the claim, enhancing its appeal but not providing extra proof, support or informational material. Finally, in (example:figd), the photo is an illustration that, while depicting the discussed person, does not aim at being relevant to the claim's veracity or impact.
While understanding the relation between the image and the text is interesting from the scientific perspective, it is also a crucial prerequisite for an efficient and meaningful fact-checking response. For example, if a supposed proof is a compromised photo, the response should highlight this fact (e.g., the video in (example:figa) has been cropped, misrepresenting the quote, which should be highlighted in the fact-checking report). On the contrary, if a compromised photo is used as a mere illustration, the effective fact-checking report should focus on the textual claim per se. Another important angle is the issue with the multimedia part. In our example, the video in (example:figa) is cropped. On the contrary, (example:figb) represents an authentic screenshot, yet it has been miscaptioned by the claim: older content, irrelevant to the current events/topics, has been repurposed. The current paper focuses on these two aspects to analyze empirically the interplay between multimedia and textual components in fake news, as identified by PolitiFact. To this end, we reannotate the PolyFake dataset AUTHOR with fine-grained labels reflecting multimedia aspects.
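The relation types and multimedia issues exemplified above lend themselves to a simple annotation schema; the sketch below is our own illustration of such fine-grained labels, not the actual PolyFake tagset.
\begin{verbatim}
# Hedged sketch: a possible schema for the text-image relation
# and the multimedia issue (label names follow the examples
# discussed above; the real tagset may differ).
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    PROOF = "proof"
    SUPPORT = "support"
    PARAPHRASE = "visual paraphrase"
    ILLUSTRATION = "illustration"

class Issue(Enum):
    AUTHENTIC = "authentic"
    CROPPED = "cropped"
    MISCAPTIONED = "miscaptioned"

@dataclass
class FakeAnnotation:
    claim: str
    platform: str
    relation: Relation
    issue: Issue

ex = FakeAnnotation("Biden to teachers: ...", "Facebook",
                    Relation.PROOF, Issue.CROPPED)
\end{verbatim}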
We have presented an in-depth analysis of the interaction between textual and multimedia components of compromised social media documents. We have identified several high-impact issues, insufficiently studied by the community at the moment. These include the interaction between different modalities, the role of the multimedia part and its impact on selecting a successful fact-checking strategy, the differences between platforms and media types (current NLP studies predominantly focus on Twitter and images) and the importance of a more principled approach to content re-use. We hope that this study, motivated by human fact-checking expertise, can spark a meaningful discussion and improve automatic modeling. We thank the Autonomous Province of Trento for the financial support of our project via the AI@TN initiative.
15
Fact Checking and Fake News Detection
622_2024
2,024
Olga Uryupina
Life and Death of Fakes: on Data Persistence for Manipulative Social Media Content
ENG
1
1
1
Università di Trento
1
0
0
0
0
0
0
Italy
Trento
This work presents an in-depth investigation of the data decay for publicly fact-checked online content. We monitor compromised posts on major social media platforms (Facebook, Instagram, Twitter, TikTok) for one year, tracking the changes in their visibility and availability. We show that data persistence is an important issue for manipulative content, on a larger scale than previously reported for online content in general. Our findings also suggest a (much) higher data decay rate for the platforms suffering most from online disinformation, indicating an important area for data collection/preservation.
Manipulative online content is rapidly becoming a more and more pervasive issue for modern society: by deliberately biasing our information flow, unscrupulous content writers can and do affect our emotional state, beliefs, reasoning and both online and offline behaviour. It is therefore not surprising that this has become a central issue for various stakeholders, from journalists and fact-checkers to NLP researchers both in academia and in the industry. Given the current rapid growth in data-driven studies of manipulative content, it is essential to have a reliable overview of data persistence issues in this specific domain: compromised content is often very dynamic and changes or becomes unavailable over time, raising reproducibility concerns. From the readers' perspective, the visibility of compromised content over time directly affects its impact: a removed or strongly downgraded document is unlikely to be read/recovered and cannot be used to promote or support other fakes. From the research and development perspective, data persistence is crucial for benchmarking, ensuring fair comparison between models as well as simply providing them with high-quality real-life training and testing examples. Starting already a decade ago, NLP benchmarking campaign studies AUTHOR have reported data persistence issues for online content used in various shared tasks, with around 10% of entries missing compared to the original dataset (gold standard). These shared tasks, however, are based almost exclusively on Twitter and do not focus specifically on compromised content. We believe that a large proportion of manipulative content is created on purpose by professional copywriters who might have different goals and motivations to keep their texts online (e.g., for click-bait purposes) or remove them (e.g., to reduce the reputation loss from being exposed as unreliable). Our work focuses specifically on the lifespan of fact-checked compromised content. We go beyond the naive binary present vs. removed view, studying more nuanced cases as well. In particular, we track compromised online posts over time for the appearance of explicit platform-specific reliability labels (e.g. "out of context"), obfuscation (the common situation when the online content is -- fully or partially -- rendered either very blurred or as a black/white box, with a message raising awareness of its limited reliability; this content, however, is still accessible to the user upon an extra click), and author-generated edits, as well as complete content removal. More specifically, we address the following research questions: \item[RQ1:] How persistent is the compromised content? How does its visibility and availability change over time? \item[RQ2:] What is the typical timeline for interaction between the content generators and fact-checkers? How -- if at all -- do content writers alter their posts after being exposed as problematic by fact-checkers? \item[RQ3:] Are the trends different across platforms? To this end, we analyze two datasets (in English) of social media documents, fact-checked by PolitiFact, an independent journalistic agency and one of the most experienced fact-checking organizations, which has provided detailed analytics for non-transparent online content since 2007.
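The longitudinal tracking described above can be pictured as a periodic snapshot loop over the fact-checked URLs; the sketch below is a schematic illustration under our own assumptions (the status taxonomy follows the categories above, and the platform check itself is a hypothetical helper, since each platform requires its own API access or scraping).
\begin{verbatim}
# Hedged sketch: periodic snapshots of post availability.
import csv, datetime

STATUSES = ["available", "platform-labeled", "obfuscated",
            "edited", "removed"]

def fetch_status(url: str) -> str:
    # hypothetical platform-specific check (API call or scraper);
    # a real implementation returns one of STATUSES
    return "available"

def snapshot(urls, out_path):
    today = datetime.date.today().isoformat()
    with open(out_path, "a", newline="") as f:
        writer = csv.writer(f)
        for url in urls:
            writer.writerow([today, url, fetch_status(url)])

# Running snapshot() at regular intervals for a year yields the
# longitudinal availability record analyzed in the study.
\end{verbatim}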
This paper aims at an in-depth analysis of data persistence for publicly fact-checked online content. After one year of thoroughly monitoring online posts fact-checked by PolitiFact, we have made the following findings. First, data persistence is a crucial and underrated issue for compromised content, with considerable decay rates. Second, the decay trends differ across platforms, with Facebook, TikTok and Instagram showing much less data persistence. Third, the decay starts immediately, with 12% of the compromised posts getting deleted at (or before) the publication of the PolitiFact report and 20% becoming unavailable within a week. This suggests an urgent need for a concerted effort to collect real-life fakes in a timely manner if we want to go beyond synthetic or simplistic datasets and train impactful fact-checking models. In the future, we want to analyze further aspects of the decay issues for compromised content. First, we plan to add more fact-checking outlets beyond PolitiFact to see if there are any effects due to the report itself. Second, we plan to study in more detail the difference in online behaviour (content removal) between anonymous users, non-anonymous users and public figures. Finally, we plan to expand our research on the interaction between content writers and fact-checkers ("editing"). We thank the Autonomous Province of Trento for the financial support of our project via the AI@TN initiative.
15
Fact Checking and Fake News Detection
623_2024
2,024
Michele Joshua Maggini, Erik Bran Marino, Pablo Gamallo Otero
Leveraging Advanced Prompting Strategies in Llama-8b for Enhanced Hyperpartisan News Detection
ENG
3
0
0
University of Santiago de Compostela, University of Évora
2
1
1
3
Michele Joshua Maggini, Erik Bran Marino, Pablo Gamallo Otero
0
0
Spain, Portugal
Santiago de Compostela, Évora
This paper explores advanced prompting strategies for hyperpartisan news detection using the Llama3-8b-Instruct model, an open-source LLM developed by Meta AI. We evaluate zero-shot, few-shot, and Chain-of-Thought (CoT) techniques on two datasets: SemEval-2019 Task 4 and a headline-specific corpus. Collaborating with a political science expert, we incorporate domain-specific knowledge and structured reasoning steps into our prompts, particularly for the CoT approach. Our findings reveal that some prompting strategies work better than others, specifically on LLaMA, depending on the dataset and the task. This unexpected result challenges assumptions about ICL efficacy on classification tasks. We discuss the implications of these findings for In-Context Learning (ICL) in political text analysis and suggest directions for future research in leveraging large language models for nuanced content classification tasks.
The proliferation of hyperpartisan news content in digital media has become a significant challenge for modern societies, potentially undermining democratic processes and social cohesion. Hyperpartisan news consists of politically polarized content presented through the use of rhetorical bias. In the media landscape, news outlets disseminate information using proprietary websites and social networks. Each news outlet frames the narratives of the facts based on its political leaning, shaping the content with rhetorical bias, emotional appeals and ideology, and reporting the facts while omitting parts of them AUTHOR. To improve the virality of the news, even mainstream journalists have adopted click-bait practices such as eye-catching titles AUTHOR. Furthermore, a news item not only stands for one opinion but can have an underlying political background that manifests through a specific vocabulary or assumptions against the opposite political leaning AUTHOR. This type of news can radicalize voters because of its emotional language AUTHOR. When there is a massive usage of these techniques, we can consider the news extremely partisan toward a particular political leaning. Although hyperpartisan news can share traits with misinformation and disinformation, it cannot be classified within these domains because the intent is not deceptive. For this reason, hyperpartisan news detection is closer to propaganda. Recent advancements in large language models (LLMs) have opened new avenues for tackling complex NLP tasks, including detecting nuanced linguistic phenomena such as bias and partisanship. Among these models is Llama3 AUTHOR, developed by Meta AI. This research makes use of Llama3-8b-Instruct, the LLM recently released by Meta AI, fine-tuned and optimized for dialogue/chat use cases, to explore its application to the detection of both hyperpartisan news headlines and articles. LLMs can be prompted with instructions to perform classification tasks; thus, we intend to use this open-source model. In our case, by prompting the model with instructions and context, we are in the In-Context Learning (ICL) domain, a learning approach different from fine-tuning that does not require updating the model's weights AUTHOR. The study aims to investigate the efficiency and compare the performance of the following ICL techniques: 0-shot with a general prompt and a specific prompt, few-shot with different numbers of examples, and Multi-task Guided CoT. We investigate how carefully crafted prompts, designed with the help of a political expert, can guide the model to identify subtle indicators of extreme political bias in news articles, leveraging the model's deep understanding of language and context. Our approach aims to overcome the limitations of traditional machine learning methods, which often struggle with the complex and evolving nature of partisan language. Furthermore, we can include definitions of the political phenomena of interest in the prompt to further define the task and narrow the application domain. By focusing on ICL to provide context and background information, we seek to (see the prompt sketch below): \item Develop a flexible and adaptable system that can identify hyperpartisan content across various topics and writing styles without the need for extensive retraining; \item Reduce ambiguity and guide the model towards the desired outcome; \item Minimize the influence of biases in the training data, by incorporating diverse perspectives and examples.
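A minimal sketch of how such definition-enriched zero-shot and few-shot prompts can be assembled follows; the definition wording, instruction, and headline are our own illustrations, not the prompts actually used in the study.
\begin{verbatim}
# Hedged sketch: building 0-shot and few-shot classification
# prompts with an expert-provided definition.
definition = ("Hyperpartisan news is politically polarized content "
              "that uses rhetorical bias, emotional language, and "
              "one-sided framing.")

def build_prompt(headline, examples=None):
    shots = ""
    if examples:  # few-shot variant: prepend labeled demonstrations
        shots = "\n".join(f"Headline: {h}\nLabel: {y}"
                          for h, y in examples) + "\n"
    return (f"{definition}\n{shots}Headline: {headline}\n"
            "Answer only 'hyperpartisan' or 'not hyperpartisan'.\n"
            "Label:")

prompt = build_prompt("You won't believe what they did now!")
# `prompt` is then sent to Llama3-8b-Instruct via its chat template.
\end{verbatim}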
This research not only contributes to the field of automated content analysis but also aims to compare the efficiency of prompting techniques and to analyze whether LLMs are valuable tools for classification tasks via ICL. The structure of the paper is as follows. In section RelatedWork we discuss the related literature; section ExperimentalSetting describes the experimental set-up we adopted and the methodology; section Results&Discussion covers the findings of our experiment, comparing them by method used, and highlights the limitations of our approach; section Conclusion reports the main findings and future research. The main contributions of the paper are the following: \item We evaluated the state-of-the-art model Llama3-8b-Instruct on two benchmark datasets in the political domain; \item We assessed how well the model performs under different inference approaches: zero-shot learning, few-shot learning, and Multi-task Guided Chain-of-Thought reasoning; \item We introduced external in-domain knowledge in the prompt and segmented the reasoning steps in the CoT according to the difficulty of the micro-tasks.
In this paper, we study the reliability of a SOTA model like Llama3-8b-Instruct for classification tasks in the political domain, namely detecting hyperpartisan articles and headlines, comparing different prompting techniques. We cast the classification problem in terms of the generative capabilities of LLMs. Experimental results contradict the hypothesis that feeding the model with more context leads to better performance AUTHOR. Indeed, in our case, the 0-shot approach was the most effective. An interesting future direction would be building a new dataset of instructions to improve models' zero-shot capability AUTHOR in identifying hyperpartisan news, inspired by datasets used for false information detection, such as Truthful-QA AUTHOR. Indeed, this dataset could be used to fine-tune generative models to enhance their performance. Additionally, we plan to explore more sophisticated prompting techniques in zero-shot and few-shot settings, like prompt tuning in the political domain AUTHOR. Finally, we would like to investigate Retrieval-Augmented Generation (RAG) and implement neuro-symbolic strategies, to incorporate retrieved documents or knowledge bases into the process. By pursuing these research directions, we aim to develop more effective and reliable systems for detecting hyperpartisan news and promoting media literacy. This work is supported by the European Union's Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Grant No. 101073351. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Executive Agency (REA). Neither the European Union nor the granting authority can be held responsible for them. The authors have no relevant financial or non-financial interests to disclose.
15
Fact Checking and Fake News Detection
624_2024
2,024
Natalia Graziuso, Andrea Zugarini, Stefano Melacci
Task-Incremental Learning on Long Text Sequences
ENG
3
1
1
Università di Siena, expert.ai
2
0
0
0
0
1
Andrea Zugarini
Italy
Siena
The extraordinary results achieved by Large Language Models are paired with issues that are critical in real-world applications. The costs of inference and, in particular, training are extremely large, both in terms of time and computational resources, and they become prohibitive when working in dynamic environments, where data and tasks are progressively provided over time. The model must be able to adapt to new knowledge, new domains, new settings, without forgetting the previously learned skills. Retraining from scratch easily becomes too costly, thus Continual Learning strategies are of crucial importance. This is even more evident when data consist of ``long'' documents, which require considerable resources to be processed by modern neural models, leading to very long prompts. This paper investigates LLM-based Task-Incremental Learning in the case of tasks exploiting long sequences of text, as is typical in summarization, question-answering on long documents, reviewing long contracts, and several others. We show how adapting the model by Task Arithmetic with LoRA, which was proposed for visual data, yields promising results also in the case of such ``long'' text data. To our best knowledge, this is the first work along this challenging direction. The outcome of the investigation of this paper is generic enough to represent an important starting point for further research in processing linguistic data in every language.
The quality of Language Models (LMs) has been rapidly improving in the last decade, showing outstanding skills when scaled to large data and networks AUTHOR, leading to the nowadays popular Large Language Models (LLMs). Solving more complex tasks with LLMs often requires processing ``long'' documents and articulated long instructions. However, handling lengthy prompts can be a significant obstacle for real-world applications, raising the costs and resources required during both inference and, in particular, training. This issue can become critical when the LLM needs to be specialized to many different tasks, domains, and, more generally, when it is applied to dynamic settings that require multiple adaptations. For instance, in real-world applications, models need to be re-trained from time to time, as new data/tasks become available. In such scenarios, the need for Continual Learning (CL) AUTHOR strategies becomes imperative. From a very generic perspective, CL focuses on the development of algorithms capable of sequentially learning from a stream of data, while preserving what was learnt in past experiences, avoiding catastrophic forgetting AUTHOR. In this work, motivated by the aforementioned issues, we study the problem of Continual Learning from ``long'' sequences of text, exploiting LLMs. We investigate several strategies based on LoRA AUTHOR to adapt an LLM to multiple tasks that are sequentially proposed over time. In particular, we first follow the route of training a single adapter in a sequential manner, then we explore Task Arithmetic to fuse multiple adapters trained independently AUTHOR. We consider the possibility of assigning different weights to each task, and we shed some light on the factors that contribute the most to catastrophic forgetting and to effective task adaptation. The outcomes of this investigation reveal that: (1) there is limited sensitivity to task order, i.e., regardless of the sequence in which tasks are presented, the overall average performance remains relatively stable, a property that, to our best knowledge, was never evaluated in the case of tasks composed of long documents; (2) despite its simplicity, Task Arithmetic demonstrates effectiveness in addressing forgetting phenomena when learning from long texts, strongly reducing the gap from multiple models independently adapted to the task data. Moreover, (3) we are the first to evaluate a recently proposed benchmark (SCROLLS AUTHOR) in a CL setting, offering reference results for further activity in processing long sequences of text. We remark that while our experiments are based on data in the English language, the generic issues we explore about handling long sequences of text are intrinsically shared by every language.
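The fusion of independently trained adapters via Task Arithmetic can be summarized in a few lines; the sketch below uses toy tensors for a single layer and illustrates the general recipe under our own assumptions, not the paper's actual training code.
\begin{verbatim}
# Hedged sketch: Task Arithmetic over LoRA task vectors
# for one linear layer (toy dimensions and random deltas).
import torch

d, r = 16, 4
base_weight = torch.randn(d, d)     # pre-trained layer weight

def lora_delta():
    A = torch.randn(r, d) * 0.01    # low-rank factors learned
    B = torch.randn(d, r) * 0.01    # during task adaptation
    return B @ A                    # task vector for this layer

task_vectors = [lora_delta() for _ in range(3)]  # one per task
alphas = [1.0, 1.0, 1.0]            # per-task fusion weights

merged = base_weight + sum(a * tv
                           for a, tv in zip(alphas, task_vectors))
# `merged` replaces the layer weight at inference time,
# approximating a model adapted to all three tasks at once.
\end{verbatim}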
In the last few years, a variety of approaches were proposed by the scientific community in the context of CL (see AUTHOR and references therein). The main goal is that of learning from newly provided information, with models that are capable of acquiring new knowledge without forgetting the previously learned one, and, more importantly, without storing the full dataset and retraining from scratch every time AUTHOR. Several efforts are dedicated to the case of lifelong Reinforcement Learning AUTHOR and of Supervised Learning AUTHOR, distinguishing among scenarios and categories of approaches AUTHOR, ranging from parameter isolation, to regularization methods, and replays AUTHOR. Unsupervised or Self-Supervised Learning approaches are also becoming popular AUTHOR, as is the adaptation of pre-trained backbones AUTHOR. Of course, neural models for processing language are a subject of study in the context of CL AUTHOR. We mention the case of language modeling in Lamol AUTHOR, which is trained to concurrently solve a task and mimic training examples, thereby preserving the distribution of previous tasks. Sun et al. AUTHOR introduce Distill and Replay, which learns to solve the task, to generate training examples formatted as context-question-answer, and to distill knowledge from a model trained on the previous task(s). Differently, Reasoning-augmented Continual Learning AUTHOR focuses on creating reasoning pathways to preserve and improve LLMs' reasoning abilities and information transfer. Surprisingly, it offers performance comparable to replay-based techniques. Together with works that learn new models from scratch, several approaches devise fine-tuning strategies for pre-trained Transformers in language processing, which turn out to be efficiently adaptable to a downstream task by learning only a small number of task-specific parameters. It is the case of models that tune the input prompt AUTHOR or of generic Adapters AUTHOR, such as the popular LoRA AUTHOR, which introduces new weight matrices, parametrized by the product of low-rank ones. Evaluating these models with long contexts AUTHOR is not frequent in the scientific literature, especially in the case in which multiple fine-tunings are sequentially applied, as is typical of CL, which is the main focus of this paper. In particular, LoRA and Task Arithmetic AUTHOR have been jointly studied to handle CL problems in vision AUTHOR, which is what this paper extends to the case of language and long sequences. We also mention works that focus on instruction-based models for CL, such as ConTinTin AUTHOR, where each task is modelled by a specific instruction that directly defines the target concept, along with a few instances that illustrate it.
Briefly, the system is required to perform a new task from its instruction, generating the expected outputs, to transfer the knowledge acquired from old tasks to help solve the new ones, and simultaneously to improve the performance on the upstream tasks. Scialom et al. AUTHOR and Luo et al. AUTHOR investigate natural language instructions paired with memory buffers and replays. Furthermore, their study excludes that CL adaptability actually emerges from the scale of the model; rather, it emerges from an intensive pre-training stage. To support this claim, they evaluate their model, CT0, not only on new and old tasks, but also on new instructions generated through instruction composition. In this scenario, they evaluate zero-shot performance, and CT0 seems to succeed in understanding instructions beyond those it was exposed to during training.
We investigated Large Language Models in progressively learning from tasks involving long sequences of text. A pre-trained model was paired with one or more adapters (LoRA), and we analyzed the role of Task Arithmetic, showing that it yields performances that are not far from the ones of multiple models independently trained to solve each task. Our results suggest a viable road to mitigating the need for large computational resources when learning from tasks based on ``long'' documents. While we exploited data in the English language, the experiences of this paper can be interpreted as generic attempts to leverage long sequences in Continual Learning, in a sense going beyond the language barrier. Future work will consider schemes to automatically tune the Task Arithmetic weights AUTHOR. The work was partially funded by: (i) ``ReSpiRA - REplicabilità, SPIegabilità e Ragionamento'', a project financed by FAIR, affiliated to spoke no. 2, falling within the PNRR MUR programme, Mission 4, Component 2, Investment 1.3, D.D. No. 341 of 03/15/2022, Project PE0000013, CUP B43D22000900004; and (ii) ``enRichMyData - Enabling Data Enrichment Pipelines for AI-driven Business Products and Services'', a Horizon Europe (HE) project, grant agreement ID: 101070284.
1
Language Models
625_2024
2,024
Francesca Chiusaroli, Federico Sangati, Johanna Monti, Maria Laura Pierucci, Tiberio Uricchio
Emojilingo: Harnessing AI to Translate Words into Emojis
ENG
5
3
1
Università di Macerata, Okinawa Institute of Science and Technology, Università di Napoli L'Orientale, Università di Pisa
4
1
0
1
Federico Sangati
0
0
Italy, Japan
Macerata, Naha, Naples, Pisa
This paper presents an AI experiment of translation into emoji conducted on a glossary from Dante Alighieri's Divine Comedy. The experiment is part of a project aiming to build up an automated emoji-based pivot language providing an interlingua as a tool for linguistic simplification, accessibility, and international communication: Emojilingo URL. The present test involves human (Emojitaliano) and machine (Chat-GPT) translations in a comparative analysis, in order to devise an automated integrated model highlighting emojis' expressive ability in transferring senses, clarifying semantic obscurities and ambiguities, and simplifying language. A first evaluation highlights Chat-GPT's ability to deal with a classic archaic literary vocabulary, while also raising issues about the criteria for better grasping meanings and forms, and about the multicultural extent of content transfer.
Consisting today of 3,782 icons, regularly updated by the Unicode Consortium, URL the international emoji catalogue contains signs for \emoji{grinning-face} facial expressions, \emoji{person-shrugging} human gestures, \emoji{person-cartwheeling} people activities, \emoji{cook} jobs, \emoji{potted-plant} plants, \emoji{squid} animals, \emoji{spaghetti} food, \emoji{light-bulb} objects, \emoji{train} symbols of travel, \emoji{tokyo-tower} places, \emoji{rainbow-flag} flags, \emoji{input-numbers} numbers, and \emoji{red-square} geometrical forms. While the visual content seems to be able to provide an encyclopedic list with universal significance, ideally capable of conveying language-independent meanings, the interpretation of emojis is, on the contrary, highly arbitrary. They are strongly subject to ambiguities and variations due to linguistic, cultural, and personal specificities. The use of emoji has considerably increased over time, and besides complementing written texts in online messages and posts as a nice means to express feelings and emotional statuses, emojis are also used to completely replace verbal language statements AUTHOR. Experiments have been carried out to explore the feasibility of using emojis as a language to convey meanings through emoji-only translations. Notable examples include the popular Emoji Dick project, the translation into emoji of Moby Dick AUTHOR, or Wonderland AUTHOR, an emoji poster created in 2014 to reproduce the full story of Lewis Carroll's Alice in Wonderland. These earliest experiments, however, lack codification and, as such, cannot be considered a language, that is, a shared system in the Saussurean sense AUTHOR. The first translation, in fact, was crowdsourced in a free and creative way, while the second one was an individual and literal translation experiment from English. A concrete attempt to create a truly codified emoji language is represented by Emojitaliano AUTHOR. Emojitaliano is an emoji code originated from a crowdsourcing experiment initiated by a social community, specifically created to share a common emoji language able to counteract the natural polysemy of emojis. Born with the translation of Collodi's Pinocchio, The Story of a Puppet AUTHOR, the structure and glossary of Emojitaliano have afterwards been usefully reapplied for the translation of texts of different genres, such as the technical declaratory prose of the Italian Constitution, the Manifesto of non-hostile communication, the narrative prose of classic moral tales (i.e., The Wolf and the Lamb, in figure pecora), and Giacomo Leopardi's lyrical poem L'infinito. Emojitaliano is based on the assessment of conventional meanings and syntax, capable of guaranteeing the sharing of sense by means of intersemiotic translation, beyond subjective interpretations. Emojitaliano provides a grammatical structure and a shared vocabulary which can be expanded and re-shared with each new translation AUTHOR. Recent experiments have opened new research horizons in evaluating the capability of large language models (LLMs) to translate words or text into emojis. This is predicated on the assumption that, given that LLMs are trained on extensive corpora sourced from the internet, they have been exposed to emojis and are able to grasp their semantics AUTHOR.
Recently, AUTHOR proposed Text2Emoji, a large text-emoji parallel corpus created by prompting the LLM Chat-GPT (OpenAI, 2023), and EmojiLM, a sequence-to-sequence model specialised in bidirectional text-emoji translation. Another translation experiment involving emojis, conducted by AUTHOR, is Emojinize. This experiment leverages the power of LLMs to translate text by considering both prior and subsequent contexts, which differs from next-token prediction. Emojinize disambiguates synonyms based on context, unlike a static lookup table, and harnesses the expressive power of combining multiple emojis. Among the experiments, a first attempt at using Chat-GPT to learn the Emojitaliano grammar was also carried out in 2023 by the Emojitaliano research group. Assuming the fundamental role of a conventional syntax as a basis for each shared code AUTHOR, the aim was to verify the ability of LLMs to learn and reapply the Emojitaliano grammar rules to produce translations of Pinocchio on their own AUTHOR. In this paper we present a follow-up experiment of automatic translation into emoji, focused on special vocabulary. Chat-GPT's translations of an authorial lexicon have been tested and then compared to the corresponding human solutions. The purpose is to test LLMs' capabilities in autonomously rendering complex vocabulary, with a view to building a translation tool into emoji as a means of language simplification: the general project and the conlang itself are named Emojilingo and available online at URL. The paper is organised as follows: section 2 introduces the Emojilingo project and Parole di Dante, the subject being translations into emoji of 365 words (Parole di Dante) from Dante's poem Divine Comedy. Section 3 presents the AI translation experiment carried out with two versions of Chat-GPT (3.5 and 4) AUTHOR on the 365 Dante words, with a focus on the method and descriptions of some examples. Section 4 provides an evaluation of the results, also obtained through AI models and through a similarity matrix, and the closing section includes conclusions and ideas on future work.
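A minimal sketch of the kind of zero-shot prompt used to elicit emoji translations follows; the instruction wording, model identifier, and example word are our own illustrative assumptions, not the exact prompts of the experiment.
\begin{verbatim}
# Hedged sketch: zero-shot emoji translation of a Dante word
# via the OpenAI chat API (prompt wording is illustrative).
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment
word = "selva"     # one of the 365 Parole di Dante

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content":
               f"Translate the Dante word '{word}' into one or "
               "more emojis and briefly explain your choice."}],
)
print(resp.choices[0].message.content)
\end{verbatim}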
In this paper we presented a translation experiment into emojis using two versions of Chat-GPT, to compare them with a human version, already available, realized within the framework of the Emojitaliano experience. The present project focuses on an integrated translation program, combining both human (Emojitaliano) and automated approaches, as a basis for a constructed emoji-based pivot language: Emojilingo. Using a zero-shot prompting approach, both Chat-GPT versions (3.5 and 4) provided emoji translations for 365 words extracted from Dante's Comedy, along with explanations for their own translation solutions. We also had Chat-GPT evaluate the three different translations produced within the Emojitaliano project, alongside those produced by Chat-GPT. The present experiment substantially succeeds in confirming the ease and ability with which AI uses emojis to convey linguistic meanings, also managing special and archaic vocabulary. We in fact tested the machine's ability to handle denotative and connotative issues in the different translation choices, i.e., the translation solutions can be multi-faceted, each one catching some of the many semantic features underlying words. In this sense most solutions may be acceptable, so as to demonstrate the versatility of the emoji code in conveying the senses. Within this broad faculty of choice, however, some options seem quite critical, due to the dissimilarity of cultural values expressed by the languages, and by the emojis themselves. That is, a main consequence of using AI for translation, also into emojis, is the reaffirmation of the crucial challenge in international translation: the need for careful attention to specific cultural dimensions during localization AUTHOR. Cultural values underlie texts, words and languages: for example, a `pig' is an `occidental' symbol for negative concepts such as `dirt' and `gluttony' (as in `lurco'), while the animal has a totemic or sacred value elsewhere; likewise, colors, or gestures, take on cultural values according to societies and cannot be accorded univocal international meanings. The choice of an icon as an international multilingual sign cannot override cultural peculiarities. Finally, cultural vocabularies may vary on the basis of literary contexts and textual genres, often conveying suggestions related to signifiers that are now lost. Given the conservative structure of poetical language, emoji translations may therefore need to move beyond the broadness of the interlingua to fully convey meanings by reproducing linguistic signs `verbatim' (e.g., `intuarsi' \emoji{person-standing}\emoji{magic-wand}\emoji{backhand-index-pointing-up}): that is, the literal solution, usually ruled out from the perspective of an international semantic code, becomes substantial to recover the cultural dimension of a literary text AUTHOR. Special care is therefore required in selecting corresponding matches in emoji so that they do not conflict with reception in different countries and societies, and so that they succeed in reaching the core content of the original, which is the main purpose of `the emojilingua'. Future research will always need a human evaluation of automated outcomes, carried out by a team with extensive expertise in cross-cultural perspectives and a deep understanding of the cultural values of emojis. This will help to limit unrestricted creativity and ensure a wide common comprehension of Emojilingo, and its highest exportability.
6
Sentiment, Emotion, Irony, Hate
626_2024
2,024
Bernardo Magnini, Alessandro Dal Pozzo, Roberto Zanoli
Understanding High-complexity Technical and Regulatory Documents with State-of-the-Art Models: A Pilot Study
ENG
3
0
0
Fondazione Bruno Kessler, Rete Ferroviaria Italiana S.p.A
2
0
0
0
0
1
Alessandro Dal Pozzo
Italy
Trento, Rome
We explore the potential of state-of-the-art Large Language Models (LLMs) to reason on the content of high-complexity documents written in Italian. We focus on both technical documents (e.g., describing civil engineering works) and regulatory documents (e.g., describing procedures). While civil engineering documents contain crucial information that supports critical decision-making in construction, transportation and infrastructure projects, procedural documents outline essential guidelines and protocols that ensure efficient operations, adherence to safety standards and effective incident management. Although LLMs offer a promising solution for automating the extraction and comprehension of high-complexity documents, potentially transforming our interaction with technical information, they may encounter significant challenges when processing such documents due to their complex structure, specialized terminology and strong reliance on graphical and visual elements. Moreover, LLMs are known to sometimes produce unexpected or incorrect analyses, a phenomenon referred to as hallucination. The goal of the paper is to conduct an assessment of LLM capacities along several dimensions, including the format of the document (i.e., selectable text PDFs versus scanned OCR PDFs), the structure of the documents (e.g., number of pages, date of the document), the graphical elements (e.g., tables, graphs, photos), the interpretation of text portions (e.g., making a summary), and the need for external knowledge (e.g., to interpret a mathematical expression). To run the assessment, we took advantage of GPT-4omni, a large multi-modal model pre-trained on a variety of different data. Our findings suggest that there is great potential for real-world applications involving high-complexity documents, although LLMs may still produce misleading information.
Technical documents employed in civil engineering contain information essential for planning, designing and constructing structures that need to ensure safety and compliance with regulations. For example, such high-complexity documents provide technical guidelines for managing the development of roads, bridges and other transport networks. Additionally, these documents are fundamental for public infrastructure projects, ensuring they serve the community effectively and safely. These documents are highly complex, particularly due to their multi-modal nature, in which textual content is mixed with various kinds of graphical content. The written content can vary from simple explanations to very detailed technical instructions, often referring to specialized regulations. The visual elements typically include tables of numerical data, mathematical formulas and detailed engineering drawings, as well as photos of natural environments and renderings of the construction once realized. In addition, documents are available either as scanned PDFs or as PDFs processed with Optical Character Recognition (OCR) software, introducing an additional layer of complexity due to potential variations in text recognition quality. Finally, civil engineering technical documents are typically long, easily reaching hundreds of pages. (Figure: one of the many visual elements occurring in the technical documents, civil engineering projects in Italian, considered in this study.) Similarly to technical documents, regulatory documents play an equally important role across the same sectors, as they outline the steps for managing incidents, supervising safety procedures and ensuring regulatory compliance. For example, railway procedural documents contain comprehensive instructions on handling incidents and supervising safety measures, introducing additional complexity through procedural frameworks. Although procedural documents lack the visual complexity typical of technical projects, such as figures, tables and graphs, they are dense with text, focusing on legal and procedural details. The paper investigates how state-of-the-art generative models reason on the content of high-complexity technical and regulatory documents written in Italian. As generative models, both LLMs and Large Multimodal Models (LMMs), are rapidly becoming more and more powerful, our research questions aim at assessing their ability to extract and interpret key information, thereby reducing the need for manual reviews by human experts. To this end, we have defined a simple question-answer evaluation framework tailored to technical and regulatory documents. As an example, we ask the model questions such as ``Provide a general summary of the technical specifications in the document'' and then manually check the model's answer. We also consider the potential for LLMs/LMMs to generate content that is not grounded in the document, an issue often referred to as model confabulation or hallucination AUTHOR. To assess confabulations we included ``trap'' questions mentioning objects that do not exist in the document. Finally, the assessment considers both selectable text PDFs, which are extractable and editable, and scanned OCR PDFs, where text is derived from scanning or from OCR. A state-of-the-art survey of articles published between 2000 and 2021, focusing on the applications of Text Mining in the construction industry, was presented in AUTHOR.
AUTHOR and AUTHOR explored NLP applications and development in construction. Various machine learning and deep learning-based NLP techniques, and their applications in construction research, are documented in AUTHOR. There are several potential real-world applications of LLMs in supporting and enhancing various sectors. Construction firms can exploit LLMs to assist in reviewing technical documents for safety regulations and building codes, helping to simplify compliance checks. Additionally, organizations with large document archives can leverage LLMs to identify potential inconsistencies or conflicts in procedures, providing valuable insights for further human review and ensuring adherence to unified operational protocols.
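As a rough illustration of the question-answer evaluation framework described above, the sketch below submits a document's text together with a set of questions, including a ``trap'' question about a non-existent object; the model name, the system prompt and the questions are assumptions, not the authors' exact setup.

```python
# A minimal sketch of the QA evaluation with a "trap" question probing
# confabulation. Model name, prompts and the text-extraction step upstream
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    "Provide a general summary of the technical specifications in the document.",
    "How many pages does the document have, and when was it issued?",
    # trap question: the object mentioned here does not occur in the document
    "Describe the suspension bridge pictured on page 12.",
]

def evaluate_document(document_text: str, model: str = "gpt-4o") -> list[str]:
    answers = []
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Answer only from the document provided. "
                            "If the information is absent, say so."},
                {"role": "user",
                 "content": f"Document:\n{document_text}\n\nQuestion: {question}"},
            ],
        )
        answers.append(response.choices[0].message.content)
    return answers  # answers are then checked manually, as in the study
```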
We showed that GPT-4omni has high potential for analyzing technical and regulatory documents. However, the model tends to make factual errors, to generate inaccurate details and to provide misleading answers supported by technical explanations. These observations highlight potential limitations when handling long and complex documents, and further research is needed to better understand and address these challenges. Our study has some limitations that should be considered. (i) Limited sample size: the evaluation was based on a dataset of four documents, which may not be representative of the broader range of technical documents. (ii) Query format: we employed a multi-question prompt format, grouping multiple questions within a single prompt; we plan to explore an approach where each question is presented as an individual prompt. (iii) Positional bias: the location of the answer within the document (beginning, middle, or end) might affect the model's performance. (iv) Contextual sensitivity: the amount of context provided could influence GPT in answering questions related to specific document elements; we plan to systematically compare the model's accuracy when presented with the entire document versus just the relevant page containing the answer. (v) Playground vs. API: we primarily used the OpenAI API for evaluation; it would be valuable to explore whether analyzing documents through OpenAI's Playground interface yields similar results.
20
In-domain IR and IE
627_2024
2,024
Vivi Nastase, Giuseppe Samo, Chunyang Jiang, Paola Merlo
Exploring Italian sentence embeddings properties through multi-tasking
ENG
4
2
1
Idiap Research Institute, University of Geneva
2
1
1
4
Vivi Nastase, Giuseppe Samo, Chunyang Jiang, Paola Merlo
0
0
Switzerland
Geneva, Martigny
We investigate to what degree existing LLMs encode abstract linguistic information in Italian in a multi-task setting. We exploit curated synthetic data on a large scale -- several Blackbird Language Matrices (BLM) problems in Italian -- and use them to study how sentence representations built using pre-trained language models encode specific syntactic and semantic information. We use a two-level architecture to model separately a compression of the sentence embeddings into a representation that contains relevant information for a task, and a BLM task. We then investigate whether we can obtain compressed sentence representations that encode syntactic and semantic information relevant to several BLM tasks. While we expected that the sentence structure -- in terms of sequence of phrases/chunks -- and chunk properties could be shared across tasks, performance and error analysis show that the clues for the different tasks are encoded in different manners in the sentence embeddings, suggesting that abstract linguistic notions such as constituents or thematic roles do not seem to be present in the pretrained sentence embeddings.
Driven by increasing computational scale and progress in deep learning techniques, NLP models can rival human capabilities on established benchmarks. New benchmarks that capture deeper levels of language understanding must therefore be created and analysed AUTHOR. Blackbird's Language Matrices (BLM) AUTHOR is a recent task inspired by visual tests of analytic intelligence (Raven Progressive Matrices/RPMs, AUTHOR). The BLM tasks have cast light on whether the correct predictions in previously studied linguistic problems, e.g. number agreement or verb alternations, stem from sentence embeddings that encode deeper linguistic information, such as syntactic structure and semantic properties of phrases AUTHOR. We found that higher-level information -- syntactic structure and argument structure -- can be assembled from the information encoded in the sentence embeddings. This, however, may not be due to a deeper understanding of such information encoded by LLMs, but rather because of useful surface indicators AUTHOR. In this paper, we adopt BLMs to investigate whether current pretrained models encode abstract linguistic notions, such as constituents, and are able to do so in a manner that comprises both functional elements, such as pronouns and demonstratives, and lexical elements, such as nominal constituents. We concentrate on Italian, and study several grammatical problems whose solutions can theoretically help each other, in a multi-task setting. We adopt a two-level architecture developed specifically to model what we know about how humans solve puzzles similar to BLMs AUTHOR. Level 1 aims to obtain compressed sentence representations that capture information about constituents and their properties; level 2 uses the compressed sentence representations to solve a BLM problem. This architecture provides a tool to study how LLMs encode different types of syntactic and semantic information. We make two contributions: (i) an initial core BLM dataset for Italian that covers linguistic problems of different nature; (ii) single and multi-task experiments that provide new insights into the information encoded by LLMs. The datasets are available at URL and the code at URL.
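The two-level architecture described above can be pictured with a schematic PyTorch sketch: level 1 compresses a pretrained sentence embedding (with a reconstruction objective), and level 2 consumes the sequence of compressed vectors and picks the candidate answer closest to its prediction. All dimensions, losses and the selection rule are illustrative assumptions rather than the authors' exact model.

```python
# Schematic sketch of the two-level architecture: level 1 compresses sentence
# embeddings, level 2 solves the BLM multiple-choice problem over a sequence.
import torch
import torch.nn as nn

class SentenceCompressor(nn.Module):          # level 1
    def __init__(self, emb_dim: int = 768, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))

    def forward(self, sent_emb):
        z = self.encoder(sent_emb)            # compressed representation
        return z, self.decoder(z)             # reconstruction for the level-1 loss

class BLMSolver(nn.Module):                   # level 2
    def __init__(self, latent_dim: int = 64, seq_len: int = 7):
        super().__init__()
        self.predictor = nn.Sequential(nn.Linear(seq_len * latent_dim, 128),
                                       nn.ReLU(), nn.Linear(128, latent_dim))

    def forward(self, z_seq):                 # z_seq: (batch, seq_len, latent_dim)
        return self.predictor(z_seq.flatten(1))

def pick_answer(pred, candidates):            # candidates: (batch, n_cand, latent_dim)
    # choose the candidate whose compressed vector is closest to the prediction
    sims = torch.cosine_similarity(pred.unsqueeze(1), candidates, dim=-1)
    return sims.argmax(dim=1)
```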
In this paper, we have presented curated synthetic datasets of Italian on two linguistic phenomena of a heterogeneous nature, namely agreement and verbal transitive/intransitive alternation, embedded in the BLM task. The performance results and the error analysis of a tailored two-level architecture have shown that multi-task environments do not help, or help only marginally in high-performance settings, suggesting that abstract linguistic notions, such as constituents or thematic roles, do not seem to be present in the learning process. Current work is developing new analyses and architectures to probe further into the encoding of information in sentence embeddings, and creating new BLM problems across various languages and linguistic phenomena.
1
Language Models
628_2024
2,024
Vivi Nastase, Chunyang Jiang, Giuseppe Samo, Paola Merlo
Exploring syntactic information in sentence embeddings through multilingual subject-verb agreement
ENG
4
2
1
Idiap Research Institute, University of Geneva
2
1
1
4
Vivi Nastase, Giuseppe Samo, Chunyang Jiang, Paola Merlo
0
0
Switzerland
Geneva, Martigny
In this paper, our goal is to investigate to what degree multilingual pretrained language models capture cross-linguistically valid abstract linguistic representations. We take the approach of developing curated synthetic data on a large scale, with specific properties, and using them to study sentence representations built using pretrained language models. We use a new multiple-choice task and datasets, Blackbird Language Matrices (BLMs), to focus on a specific grammatical structural phenomenon -- subject-verb agreement across a variety of sentence structures -- in several languages. Finding a solution to this task requires a system detecting complex linguistic patterns and paradigms in text representations. Using a two-level architecture that solves the problem in two steps -- detect syntactic objects and their properties in individual sentences, and find patterns across an input sequence of sentences -- we show that despite having been trained on multilingual texts in a consistent manner, multilingual pretrained language models have language-specific differences, and syntactic structure is not shared, even across closely related languages.
Large language models, trained on huge amounts of text, have reached a level of performance that rivals human capabilities on a range of established benchmarks AUTHOR. Despite high performance on high-level language processing tasks, it is not yet clear what kind of information these language models encode, and how. For example, transformer-based pretrained models have shown excellent performance on tasks that seem to require that the model encodes syntactic information AUTHOR. All the knowledge that LLMs encode comes from unstructured texts and the shallow regularities they are very good at detecting, which they are able to leverage into information that correlates with higher structures in language. Most notably, AUTHOR have shown that from the unstructured textual input, BERT AUTHOR is able to infer POS, structural, entity-related, syntactic and semantic information at successively higher layers of the architecture, mirroring the classical NLP pipeline AUTHOR. We ask: How is this information encoded in the output layer of the model, i.e. the embeddings? Does it rely on surface information -- such as inflections and function words -- and is it assembled on the demands of the task/probes AUTHOR, or does it indeed reflect something deeper that the language model has assembled through the progressive transformation of the input through its many layers? To investigate this question, we use a seemingly simple task -- subject-verb agreement. Subject-verb agreement is often used to test the syntactic abilities of deep neural networks AUTHOR because, while apparently simple and linear, it is in fact structurally, and theoretically, complex, and requires connecting the subject and the verb across arbitrarily long or complex structural distance. It has an added useful dimension -- it relies on syntactic structure and grammatical number information that many languages share. In previous work we have shown that simple structural information -- the chunk structure of a sentence -- which can be leveraged to determine subject-verb agreement, or to contribute towards more semantic tasks, can be detected in the sentence embeddings obtained from a pre-trained model AUTHOR. This result, though, does not cast light on whether the discovered structure is deeper and more abstract, or rather just a reflection of surface indicators, such as function words or morphological markers. To tease apart these two options, we set up an experiment covering four languages: English, French, Italian and Romanian. These languages, while different, have shared properties that make sharing of syntactic structure a reasonable expectation, if the pretrained multilingual model does indeed discover and encode syntactic structure. We use parallel datasets in the four languages, built by (approximately) translating the BLM-AgrF dataset AUTHOR, a multiple-choice linguistic test inspired by the Raven Progressive Matrices visual intelligence test, previously used to explore subject-verb agreement in French. Our work offers two contributions: (i) four parallel datasets -- in English, French, Italian and Romanian -- focused on subject-verb agreement; (ii) cross-lingual and multilingual testing of a multilingual pretrained model, to explore the degree to which syntactic structure information is shared across different languages. Our cross-lingual and multilingual experiments show poor transfer across languages, even those most related, like Italian and French.
This result indicates that pretrained models encode syntactic information based on shallow and language-specific clues, from which they are not yet able to take the step towards abstracting grammatical structure. The datasets are available at URL and the code at URL.
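As a minimal illustration of the experimental ingredients, the sketch below extracts sentence embeddings from a multilingual pretrained model for (approximately) parallel sentences; the model name and the [CLS]-pooling choice are assumptions, and the actual study trains probes on such embeddings rather than comparing them directly.

```python
# Sketch: sentence embeddings from a multilingual pretrained model, usable
# for cross-lingual probing (train a probe on one language, test on another).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def sentence_embedding(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, tokens, dim)
    return hidden[0, 0]                             # [CLS] vector as sentence embedding

emb_fr = sentence_embedding("Les livres sur la table sont neufs.")
emb_it = sentence_embedding("I libri sul tavolo sono nuovi.")
print(torch.cosine_similarity(emb_fr, emb_it, dim=0))
```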
We have aimed to add some evidence to the question ``How do state-of-the-art systems `know' what they `know'?'' AUTHOR by projecting the subject-verb agreement problem into a multilingual space. We chose languages that share syntactic structures but have particular differences that can provide clues about whether the learned models rely on shallower indicators, or the pretrained models encode deeper knowledge. Our experiments show that pretrained language models do not encode abstract syntactic structures; rather, this information is assembled ``upon request'' -- by the probe or task -- based on language-specific indicators. Understanding how information is encoded in large language models can help determine the next necessary step towards making language models truly deep. Acknowledgments: We gratefully acknowledge the partial support of this work by the Swiss National Science Foundation, through SNF Advanced Grant TMAG-1_209426 to PM.
1
Language Models
629_2024
2,024
Eleonora Litta, Marco Carlo Passarotti, Paolo Brasolin, Giovanni Moretti, Francesco Mambrini, Valerio Basile, Andrea Di Fabio, Cristina Bosco
The Lemma Bank of the LiITA Knowledge Base of Interoperable Resources for Italian
ENG
8
2
1
Università Cattolica del Sacro Cuore, Università di Torino
2
0
0
0
0
0
0
Italy
Milan, Turin
The paper introduces the LiITA Knowledge Base of interoperable linguistic resources for Italian. After describing the principles of the Linked Data paradigm, on which LiITA is grounded, the paper presents the lemma-centred architecture of the Knowledge Base and details its core component, consisting of a large collection of Italian lemmas (called the Lemma Bank) used to interlink distributed lexical and textual resources.
When considering the number of digital linguistic resources, either lexical or textual, Italian is among the richest languages: e.g., at the time of writing, a search on the CLARIN Virtual Language Observatory, filtered for the Italian language, returns more than 8,000 results. Like other high-resource languages, Italian is provided with a large set of fundamental resources, including WordNets (AUTHOR and AUTHOR), a few treebanks available from the Universal Dependencies collection, historical corpora, and reference corpora of written (e.g., CORIS/CODIS AUTHOR) and spoken language (e.g., KIParla AUTHOR). However, as is the case for many other languages, most linguistic resources for Italian vary in terms of data format, annotation criteria, and/or adopted tagsets. Such variation hinders full interaction between the (meta)data provided by the many available resources, with a negative impact on the empirical study of the language and on resource usability. Indeed, different resources may provide different information, or information of different granularity, about the same common object, namely words, which appear as occurrences in corpora and as entries in dictionaries or lexicons. Making this wealth of information interact represents one of today's main challenges, to best leverage the huge asset of (meta)data collected over decades of work. As a consequence, a very active line of research currently focuses on the so-called Linguistic Linked Open Data (LLOD), aiming to define common practices for the representation and publication of linguistic resources according to the principles of the Linked Data paradigm, which underpins the Semantic Web (Italian resources already published as LLOD include the ItalWordNet v.2 and a collection of names from the PAROLE SIMPLE CLIPS (PSC) lexicon). A recently concluded COST Action (Nexus Linguarum) resulted both in the creation of a large and cohesive scientific community and in the definition of a set of shared vocabularies for linguistic knowledge description. Some of these vocabularies have been widely applied in the LiLa Knowledge Base (KB), which is probably the main LLOD use case currently available. LiLa (Linking Latin) is a KB of Latin linguistic resources made interoperable through their representation and publication according to the Linked Data principles. Thanks to its streamlined and language-independent architecture, LiLa is today a reference model for projects aiming to achieve online interoperability between distributed linguistic resources. Building on the experience of LiLa and reusing its architecture, the LiITA (Linking Italian, http://www.liita.it/) project has started the creation of a KB of interoperable linguistic resources for Italian published as Linked Data. This paper describes the development of the fundamental component of the LiITA KB, which consists of a collection of Italian lemmas (called the Lemma Bank) that serves as the connection point between word occurrences in corpora and entries in the lexical resources that will be published in the KB.
In this paper we presented the first steps towards the publication as LLOD of a collection of canonical citation forms (lemmas) for Italian. This Lemma Bank is the core component of LiITA, a knowledge base of interoperable linguistic resources for Italian inspired by the LiLa knowledge base for Latin. LiITA aims to compensate for the current lack of interoperability between Italian resources, as well as to become the pivot interlinking all present and future lexicons and corpora for Italian. To this aim, the Lemma Bank is modelled so that it can harmonise the different lemmatisation criteria found in lexical and textual resources, following a bottom-up approach rather than a top-down one. Building a Lemma Bank to make distributed resources interoperable in Linked Data is an open-ended process. As the linking of more and more resources to the KB might require the inclusion of new lemmas, the LiITA Lemma Bank will keep growing, both through the extraction of lemmas from other lexical sources and in a resource-driven fashion. Besides extending the Lemma Bank and linking the first resources, the LiITA project will develop online services, following what has been done for LiLa AUTHOR. The process of linking a text or corpus in the KB must be supported by an accessible tool performing automatic lemmatisation, PoS-tagging and linking. Currently, a new Stanza model AUTHOR has been trained combining all the existing Italian treebanks. This model will serve as the foundation for the linkage process of the textual resources to be included in the LiITA KB; it can be found at https://github.com/LiITA-LOD/LiITA_NLP_Models. The advanced interrogation of the data offered by all the resources interlinked in LiITA will be eased by a graphical interface that will help with the task of writing complex SPARQL queries. Finally, given its language-independent architecture and its use of common vocabularies for knowledge description, LiITA promises to have a substantial methodological impact on how linguistic resources are published and made interoperable as Linked Data.
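For the lemmatisation step that linking relies on, a minimal sketch with the off-the-shelf Italian Stanza pipeline looks as follows; the custom model trained on the combined treebanks would be loaded analogously, so the standard pipeline here is just a stand-in.

```python
# Sketch of the lemmatisation step underlying the linking pipeline; the
# default Italian Stanza model stands in for the project's custom model.
import stanza

stanza.download("it")  # first run only
nlp = stanza.Pipeline("it", processors="tokenize,pos,lemma")

doc = nlp("I libri sono stati collegati alla banca dei lemmi.")
for sentence in doc.sentences:
    for word in sentence.words:
        # each (form, lemma, upos) triple is what gets matched against
        # Lemma Bank entries during the linking of a corpus
        print(word.text, word.lemma, word.upos)
```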
7
Lexical and Semantic Resources and Analysis
630_2024
2,024
Luca Capone, Serena Auriemma, Martina Miliani, Alessando Bondielli, Alessandro Lenci
Lost in Disambiguation: How Instruction-Tuned LLMs Master Lexical Ambiguity
ENG
5
2
0
Università di Pisa
1
0
0
0
0
0
0
Italy
Pisa
This paper investigates how decoder-only instruction-tuned LLMs handle lexical ambiguity. Two distinct methodologies are employed: eliciting rating scores from the model via prompting, and analysing the cosine similarity between pairs of polysemous words in context. Ratings and embeddings are obtained by providing pairs of sentences from AUTHOR to the model. These ratings and cosine similarity scores are compared with each other and with the human similarity judgments in the dataset. Surprisingly, the model scores show only a moderate correlation with the subjects' similarity judgments and no correlation with the target word embedding similarities. A vector space anisotropy inspection has also been performed, as a potential source of the experimental results. The analysis reveals that the embedding spaces of two out of the three analyzed models exhibit low anisotropy, while the third model shows relatively moderate anisotropy compared to previous findings for models with similar architecture AUTHOR. These findings offer new insights into the relationship between generation quality and vector representations in decoder-only LLMs.
Lexical ambiguity (LA) is a peculiar characteristic of human language communication. Words often carry multiple meanings, and discerning the intended sense requires nuanced comprehension of contextual cues. LA is a broad concept subsuming several semantic phenomena, such as regular and irregular polysemy, homonymy, and the coinage of new senses. Humans handle such ambiguity effortlessly, leveraging contextual information, prior knowledge, and pragmatic inference. However, for Large Language Models (LLMs), which rely on statistical patterns in text data, accurately resolving lexical ambiguity remains a challenging task. Despite their remarkable capability of using words appropriately in context, one critical aspect that requires deeper investigation is whether such models possess human-like lexical competence, enabling them to generalize from multiple instances of the same phenomenon, or whether they are simply mimicking these instances. In this paper, we aim to investigate how LLMs handle LA. Specifically, we challenged three decoder-only instruction-tuned models to generate lexical similarity ratings for word pairs used in two different contexts, with various degrees of sense similarity. To achieve this, we employed a chain-of-thought approach, prompting the models to produce a step-by-step reasoning process before assigning their ratings, allowing them to better distinguish between different senses of the same term. For this task, we used the dataset released by \citet{haber2021patterns}, which includes human similarity judgments. The models' generated ratings were correlated with human similarity judgments to determine whether their lexical disambiguation competence aligns with that of humans. Additionally, we computed the cosine similarity between the models' internal representations of the ambiguous target words. Our research question is twofold: (i) to assess whether the models' generated ratings are consistent with their internal representations of the target words; (ii) to determine whether the internal representations have a distribution more similar to human ratings than the generated responses. We are aware that context-sensitive word embeddings, like those of LLMs, can suffer from a representation degeneration problem (see Section sec:anisotropy for further details), which limits their semantic representational power. Hence, we included in our analysis a brief overview of how this phenomenon affects the internal representational space of the models under investigation. To the best of our knowledge, this is the first study in which different decoder-only models were tested on their metalinguistic competence regarding LA. Understanding how LLMs manage this type of complex semantic phenomenon, based on the interplay of multiple contextual factors, can guide new improvements in training methodologies for the development of more sophisticated and robust models that better mimic human-like language understanding.
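A sketch of the embedding-side measurement might look as follows: extract the contextual embedding of the target word in each sentence from a decoder-only model, compute their cosine similarity, and correlate the scores with human ratings via Spearman. The model name, the choice of the last hidden layer and the subtoken matching are simplifying assumptions.

```python
# Sketch: cosine similarity between contextual embeddings of a target word
# in two sentences, plus Spearman correlation against human ratings.
import torch
from scipy.stats import spearmanr
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative instruction-tuned model
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, output_hidden_states=True)

def target_embedding(sentence: str, target: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[-1][0]   # (tokens, dim)
    # naive targeting: average the positions whose ids belong to the target word
    target_ids = set(tokenizer(target, add_special_tokens=False)["input_ids"])
    positions = [i for i, t in enumerate(inputs["input_ids"][0].tolist())
                 if t in target_ids]
    assert positions, "target subtokens not found in the sentence"
    return hidden[positions].mean(dim=0)

def css(sent1: str, sent2: str, target: str) -> float:
    e1, e2 = target_embedding(sent1, target), target_embedding(sent2, target)
    return torch.cosine_similarity(e1, e2, dim=0).item()

# correlation of cosine similarity scores (CSS) with human rating scores (HRS):
# rho, p = spearmanr(css_scores, human_ratings)  # one score pair per dataset item
```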
Our study investigates how LLMs handle LA, using two distinct methodologies: eliciting rating scores from the model and analyzing the cosine similarity between pairs of polysemous words. We calculated the Spearman correlation between HRS vs. MRS, HRS vs. CSS, and MRS vs. CSS (human rating scores, model rating scores and cosine similarity scores, respectively). The aim was to determine whether the model's metalinguistic knowledge aligns with its internal representations, and to assess whether human ratings more closely match the outputs generated by the model than its internal representations. The lack of correlation between CSS and MRS provides intriguing insights into the relationship between the internal representations of LLMs and the responses they generate in metalinguistic tasks, like explicitly assigning similarity ratings. Specifically, the argument presented in AUTHOR appears to be validated: generated responses do not always reflect the model's internal processing. AUTHOR compared model generations with their probability distributions and found the latter method to be more accurate. In contrast, in our study, using the internal representations of the model (i.e., the contextual embeddings, as motivated in Section sec:relworks) proved to be a less reliable method. The most straightforward conclusion is that generative LLMs might be suboptimal for estimating word sense similarity. The superior performance of probability estimation reported by AUTHOR might be due to its direct link to the prediction training objectives of LLMs. To further investigate the relationship between CSS and MRS, we inspected the anisotropy of the embeddings. The average cosine similarity among a sample of generated tokens was relatively low, indicating that anisotropy did not affect our cosine similarity measures and is not characteristic of all the decoder-only models under investigation. The lack of anisotropy observed in some of the analyzed decoder-only models is at odds with the conclusions of AUTHOR, who reported a highly anisotropic space for GPT-2. In conclusion, the relationship between human judgments, model generations, and internal representations appears unclear and calls for further research. Despite the low anisotropy of the examined models, cosine similarity did not reveal a correlation between the generations and the internal representations of the models, indicating a need for deeper investigation. We plan to repeat the experiments by leveraging recent results with sparse autoencoders AUTHOR to decompose the meanings of lexically ambiguous words. This could provide a deeper understanding of the models' ability to handle and represent polysemy, including scenarios where polysemy emerges from the whole context of a sentence. We could not extract embeddings from commercial models, such as those provided by OpenAI, which are accessible only through APIs. However, it would be valuable in future research, if and when this functionality becomes available, to analyze and compare the internal representations and the generated outputs of these state-of-the-art models. Another promising avenue for future research is to examine the differences between vector representations and generated tokens with respect to linguistic phenomena beyond polysemy and lexical ambiguity.
For instance, incorporating out-of-vocabulary words could allow for an exploration of semantic shifts caused by the addition of prefixes or suffixes (e.g., ``order'' vs. ``dis-order''), offering valuable insights. This analysis would benefit from using a tokenization strategy that treats morphemes as subtokens, alongside an investigation into the degree of anisotropy in these models. We acknowledge financial support under the PRIN 2022 Project ``Computational and linguistic benchmarks for the study of verb argument structure'' -- CUP I53D23004050006 -- Grant Assignment Decree No. 1016 adopted on 07/07/2023 by the Italian Ministry of University and Research (MUR). This work was also supported under the PNRR -- M4C2 -- Investimento 1.3, Partenariato Esteso PE00000013 -- ``FAIR -- Future Artificial Intelligence Research'' -- Spoke 1 ``Human-centered AI,'' funded by the European Commission under the NextGenerationEU programme, and partially supported by the Italian Ministry of University and Research (MUR) in the framework of the PON 2014-2021 ``Research and Innovation'' resources -- Innovation Action -- DM MUR 1062/2021 -- Title of the Research: ``Modelli semantici multimodali per l'industria 4.0 e le digital humanities.''
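For reference, the anisotropy inspection mentioned above amounts to estimating the expected cosine similarity between contextual embeddings of randomly sampled tokens; a minimal sketch, with the sampling strategy as an assumption, is given below.

```python
# Sketch of an anisotropy estimate: the average cosine similarity between
# randomly paired token embeddings. High values indicate an anisotropic
# (narrow-cone) embedding space.
import torch

def anisotropy(embeddings: torch.Tensor, n_pairs: int = 10_000) -> float:
    """embeddings: (n_tokens, dim) hidden states sampled across many texts."""
    n = embeddings.size(0)
    i = torch.randint(0, n, (n_pairs,))
    j = torch.randint(0, n, (n_pairs,))
    keep = i != j                                   # skip self-pairs
    sims = torch.cosine_similarity(embeddings[i[keep]], embeddings[j[keep]], dim=-1)
    return sims.mean().item()
```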
22
Distributional Semantics
631_2024
2,024
Andrew Zamai, Leonardo Rigutini, Marco Maggini, Andrea Zugarini
SLIMER-IT: Zero-Shot NER on Italian Language
ENG
4
0
0
Università di Siena, expert.ai
2
0
0
0
0
3
Andrew Zamai, Leonardo Rigutini, Andrea Zugarini
Italy
Siena
Traditional approaches to Named Entity Recognition (NER) frame the task as a BIO sequence labeling problem. Although these systems often excel in the downstream task at hand, they require extensive annotated data and struggle to generalize to out-of-distribution input domains and unseen entity types. On the contrary, Large Language Models (LLMs) have demonstrated strong zero-shot capabilities. While several works address Zero-Shot NER in English, little has been done in other languages. In this paper, we define an evaluation framework for Zero-Shot NER, applying it to the Italian language. Furthermore, we introduce SLIMER-IT, the Italian version of SLIMER, an instruction-tuning approach for zero-shot NER leveraging prompts enriched with definition and guidelines. Comparisons with other state-of-the-art models demonstrate the superiority of SLIMER-IT on never-seen-before entity tags.
Named Entity Recognition (NER) plays a fundamental role in Natural Language Processing (NLP), often being a key component in information extraction pipelines. The task involves identifying and categorizing entities in a given text according to a predefined set of labels. While person, organization, and location are the most common, applications of NER in certain fields may require the identification of domain-specific entities. (Figure: the SLIMER-IT prompt, in which definition and guidelines steer the model's labelling.) Manually annotated data has always been critical for the training of NER systems AUTHOR. Traditional methods tackle NER as a token classification problem, where models are specialized on a narrow domain and a pre-defined label set AUTHOR. While achieving strong performance on the data distribution they were trained on, they require extensive human annotation for the downstream task at hand. Additionally, they lack generalization capabilities when it comes to addressing out-of-distribution input domains and/or unseen labels AUTHOR. On the contrary, Large Language Models (LLMs) have recently demonstrated strong zero-shot capabilities. Models like GPT-3 can tackle NER via In-Context Learning AUTHOR, with Instruction-Tuning further improving performance AUTHOR. To this end, several models have been proposed to tackle zero-shot NER AUTHOR. In particular, SLIMER AUTHOR proved to be particularly effective on unseen named entity types, by leveraging definitions and guidelines to steer the model generation. However, little has been done for zero-shot NER on non-English data. More generally, as pointed out in AUTHOR, NER is understudied in languages like Italian, especially outside the traditional news domain and the person, location, organization classes. To this end, we propose in this paper an evaluation framework for Zero-Shot NER, and we apply it to the Italian language. In addition, we fine-tune a version of SLIMER for Italian, which we call SLIMER-IT. In the experiments, we explore different LLM backbones and assess the impact of Definition and Guidelines (D\&G). When comparing SLIMER-IT with state-of-the-art approaches, either using models pre-trained on English or adapted for Italian, the results demonstrate SLIMER-IT's superiority in labelling unseen entity tags.
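To give a flavour of the approach, below is a sketch of a SLIMER-style zero-shot prompt for Italian, in which the definition and guidelines of a possibly never-seen entity type steer the labelling; the exact template wording and the example are assumptions, not the released SLIMER-IT template.

```python
# Sketch of a SLIMER-style zero-shot NER prompt: a definition and guidelines
# for the entity type steer the model's labelling. Template wording is
# an illustrative assumption.
def build_prompt(text: str, tag: str, definition: str, guidelines: str) -> str:
    return (
        f"Estrai dal testo le entità di tipo \"{tag}\".\n"
        f"Definizione: {definition}\n"
        f"Linee guida: {guidelines}\n"
        f"Testo: {text}\n"
        "Rispondi con una lista JSON di stringhe, una per entità estratta."
    )

prompt = build_prompt(
    text="Il Ponte Vecchio attraversa l'Arno a Firenze.",
    tag="OPERA ARCHITETTONICA",
    definition="Strutture costruite dall'uomo, come ponti, torri ed edifici.",
    guidelines="Non includere i nomi di città o di corsi d'acqua.",
)
# the model's completion is then parsed as a JSON list, e.g. with json.loads(...)
```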
In this paper, we proposed an evaluation framework for Zero-Shot NER, which we applied to Italian. Thanks to such a framework, we can better investigate different zero-shot properties depending on the scenario (in-domain, OOD, unseen NEs). On top of that, we compared several state-of-the-art approaches, with particular focus on SLIMER, which, thanks to its use of definitions and guidelines, is well suited to deal with novel entity types. Indeed, SLIMER-IT, our fine-tuned model based on LLaMAntino-3, surpasses other state-of-the-art techniques by large margins. In the future, we plan to further extend the zero-shot NER benchmark and to implement an input caching mechanism for scalability to large label sets.
1
Language Models
632_2024
2,024
Mariachiara Pascucci, Mirko Tavosanis
Confronto tra diversi tipi di valutazione del miglioramento della chiarezza di testi amministrativi in lingua italiana
ITA
2
1
1
Università di Pisa
1
0
0
0
0
0
0
Italy
Pisa
The paper presents a comparison of different types of evaluation of administrative texts in the Italian language on which a clarity-improvement intervention was carried out. The clarity improvement was performed by human experts and by ChatGPT. The evaluation was carried out in four different ways: by expert evaluators, used as a reference; by evaluators with good skills who received dedicated training; by generic evaluators recruited through a crowdsourcing platform; and by ChatGPT. The results show that the closest match to the evaluation by expert evaluators was achieved, by a wide margin, by the evaluators with good skills and dedicated training; the second-best results were obtained by requesting the evaluation from ChatGPT; the worst results came from the generic evaluators recruited through the crowdsourcing platform. Task features that may have influenced the outcome are also discussed.
The spread of generative artificial intelligence systems has led to a large demand for the evaluation of their capabilities. The type of assessment universally considered the most valid remains, in general, that carried out by human beings, but in practice it can be conducted in different ways and with results of very different value. Moreover, for some capabilities there are still no shared evaluation standards. The latter category undoubtedly includes the assessment of the overall improvement in the clarity of texts in the Italian language, which is the subject of the analysis described here. The existing objective indices for the analysis of texts, such as GULPEASE or the proportion of words included in the Basic Vocabulary, actually describe only limited aspects of any text. As for clarity in itself, while indications on how to write clearly abound (an updated summary is given in [1]), broadly agreed criteria for evaluating the resulting texts have never been codified [2]. Of course, many current evaluation methods provide at least a first orientation in most cases. For example, [3] showed that through crowdsourcing it is possible to obtain a generic but reliable indication of the improvement in the clarity of English texts. However, studies on the effectiveness of such practices are still very few, and there is no doubt a great need to improve the current level of knowledge. This contribution fits into this context, as it compares different methods of assessing the improvement of the clarity of texts. The objects of the evaluation were fairly long texts, representative of administrative Italian, made clearer through human intervention and through reformulation with ChatGPT (version 3.5); the context, which saw the implementation of several related evaluation activities, is described in detail in [4]. For the purposes of this contribution, the evaluation was carried out in four different ways: by expert evaluators, used as a reference; by evaluators with good skills who received dedicated training; by generic evaluators recruited through a crowdsourcing platform; and by ChatGPT. In all cases, the same set of assessment instructions was used. The results were analysed in [4] for the information they provide regarding the ability of systems such as ChatGPT to effectively improve text clarity. More specifically, we show here how the judgements differ across the four methods of assessment.
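For reference, GULPEASE, one of the objective indices mentioned above, is computed as 89 + (300*sentences - 10*letters)/words; a minimal implementation, with naive sentence and word splitting as a simplifying assumption, is shown below.

```python
# Minimal GULPEASE readability index for Italian text:
# 89 + (300 * sentences - 10 * letters) / words.
# Sentence/word splitting here is deliberately naive.
import re

def gulpease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = max(1, len(text.split()))
    letters = sum(ch.isalpha() for ch in text)
    return 89 + (300 * sentences - 10 * letters) / words

# Scores range roughly 0-100: higher means easier to read.
print(round(gulpease("Il presente provvedimento entra in vigore il giorno successivo."), 1))
```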
At first glance, the results of the different evaluation methods could be interpreted as a devaluation of crowdsourcing, given that a simple request to ChatGPT delivers higher-quality results. However, the characteristics of the activity carried out make it advisable not to draw overly general conclusions. First of all, caution is warranted because the evaluation depends on the scale used. In a context where it is known that the score can only be 4 or 5, simply assigning scores at random would ultimately give 4.5 to both Group A and Group B, deviating from the expert judgement by 0.26 for the overall assessment and by 0.34 for aspects 1, 2 and 5 -- values very close to those provided by the trained evaluators. In these circumstances, it seems useful first of all to create more specific and targeted evaluation grids. Moreover, the high performance of current systems makes 1-5 scales, in which a score of 1 must be assigned to a completely incomprehensible text and a score of 5 to a perfectly understandable one, far less useful than in the past. Certain limits of the analysis should also be taken into account. One of these is the involvement of the authors in the rewriting of some texts: even though, in our opinion, the characteristics of the evaluation make the risk of alterations very limited, we plan to modify the protocol for future activities of the same kind, delegating all rewriting to third parties. For the evaluation of ChatGPT-generated texts it may also be useful to have the texts evaluated by a different system; in general, extending and repeating the evaluations is of course necessary to validate the results. Nonetheless, the results certainly call for attention to the limitations of widespread practices such as crowdsourcing, which, on the task under consideration, showed a considerable deviation from the expert evaluation. Moreover, if the rapid and economical evaluation provided by systems such as ChatGPT were to be regularly confirmed as closer to expert evaluation than crowdsourcing, the motivation for crowdsourcing itself would disappear.
11
Text Simplification
633_2024
2,024
Leonardo Ranaldi, Giulia Pucci, Fabio Massimo Zanzotto
How far does the sequence of compositions impact Multilingual Pre-Training?
ENG
3
1
0
University of Edinburgh, University of Aberdeen, Università di Roma Tor Vergata
3
1
0
2
Leonardo Ranaldi, Giulia Pucci
0
0
Italy, United Kingdom
Edinburgh, Rome, Aberdeen
An efficient strategy for conducting the pre-training of language models is the concatenation of contiguous sequences of text of fixed length, with causal masking estimating the probability of each token given its context. Yet earlier work suggests that this technique affects model performance, as it can include misleading information from previous text sequences during pre-training. To fill this gap, intra-context and rank-based causal masking techniques have been proposed, in which the probability of each token is conditioned only on the previous ones in the same document or in ranked sequences, avoiding misleading information from different contexts. However, the sequences provided by these techniques have been little explored, overlooking the opportunity to optimise their composition by manipulating the volume and heterogeneity of the sequences and improving unbalanced pre-training settings. In this paper, we demonstrate that organising text chunks according to a policy aligned with text similarity effectively improves pre-training, enhances the learning and cross-lingual generalisation capabilities of language models, maintains efficiency, and allows for fewer instances.
Large language models (LLMs) are pre-trained on huge amounts of documents by optimizing a language modelling objective, and show an intriguing ability to solve various downstream NLP tasks. \citet{ranaldi-etal-2023-modeling}, in multilingual settings, and later \citet{zhao2024analysingimpactsequencecomposition} highlighted the importance of pre-training data quality, diversity and composition methodologies. Our research takes a step further by exploring the influence of the heterogeneity of pre-training sequences on cross-lingual generalisation, potentially leading to significant advancements in understanding LLMs' learning properties. (Figure: the sequence-composition pipelines. Mix-based packing randomly samples documents from all corpora to construct pre-training sequences, which can pack documents from different sources; Sequence-based packing randomly samples documents from a single source to construct a sequence; Retrieve-based packing operates via a ranking-based construction process. The bottom block represents a document Collector that caches a set of documents randomly sampled from the corpora.) In the pre-training of decoder-only architectures, instances are constructed by packing, which combines randomly sampled texts (i.e., documents) into a chunk that matches the size of the context window without using any selection policy. Causal masking then predicts the next token conditioned on the previous ones, including those from different documents (portions of non-contiguous texts) in the chunk. The ways to mitigate this arbitrary procedure are: (i) intra-document masking AUTHOR, where the likelihood of each token is conditioned only on the previous tokens from the same document AUTHOR, and (ii) retrieval-based masking AUTHOR, where the likelihood is conditioned on similar documents retrieved by retrieval systems. To study the role of heterogeneity and volume of samples in sequence-composition strategies (i.e., packing and masking pipelines), we pre-train language models using different masking approaches (described in Section sec:masking) and compare them with models pre-trained via traditional causal masking with different packing approaches, varying the sequence composition of the documents in the pre-training chunks. To study the impact on cross-lingual generalisation, we use cross-lingual settings (i.e., Italian-English). Complementing the foundational approaches proposed in AUTHOR, we operate on bilingual corpora. Hence, we analyse the results produced by a commonly used baseline method that randomly samples and packs documents (MixChunk), a process that samples and packs documents from the same source based on their composition and origin (UniChunk), and an efficient retrieval-based packing method that retrieves and packs related documents (Section sec:bm25chunk). The experimental results indicate that operating via causal masking (MixChunk) with arbitrary sequence patterns of documents leads to the inclusion of misleading information stemming from different contexts during pre-training (Section sec:pretraining), negatively impacting the performance of the models on downstream tasks (Section sec:downstream_tasks). Instead, intra-document masking, which avoids this misleading phenomenon during pre-training, significantly improves the models' performance and does not impact the runtime. Although intra-document masking performs well, it limits the operability of sequence composition mixing documents from different corpora (in our case, in different languages as well).
As also revealed by \citet{zhao2024analysingimpactsequencecomposition}, this is partly solved by UniChunk's avoidance of packing documents from different distributions, which improves the performance of causal-masking models on downstream tasks but still does not allow individual sequences to be selected. Hence, we use a retrieval-based packing method, which allows operating directly on sequences, improving cross-lingual models' language modeling, in-context learning and generative capabilities while using causal masking, thus paying a small cost for document sorting but achieving tangible results. Our main findings can be summarised as follows: (i) by analysing different pre-training strategies in cross-lingual settings, we reveal that operating through causal masking while considering the order and patterns of the document sequences leads to significant improvements; in addition, retrieval-based techniques provide resilience and allow for the selection of pre-training sequences, guaranteeing heterogeneity and reducing data (Section sec:pretraining); (ii) we show important benefits for the in-context learning capabilities of downstream models, observing that in low-resource settings it is possible to achieve the same performance and, in some cases, cross-lingual generalisation (in our case, English-Italian) (Section sec:downstream_tasks); (iii) in conclusion, we show that the retrieval-based packing method, by allowing a flexible sequence-composition process, brings tangible benefits to unbalanced cross-lingual learning while using less pre-training data.
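The three composition strategies can be sketched schematically as follows; the chunk-construction details and the use of BM25 as the retrieval component are simplifying assumptions about the ranking-based process, not the paper's exact implementation.

```python
# Schematic sketch of the three packing strategies (documents are token-id
# lists). MixChunk mixes sources, UniChunk packs a single source, and the
# retrieval-based variant packs BM25-similar neighbours of a seed document.
import random
from rank_bm25 import BM25Okapi   # pip install rank-bm25

def mix_chunk(corpora: dict[str, list[list[int]]], chunk_len: int) -> list[int]:
    """MixChunk: pack randomly sampled documents from *all* corpora."""
    docs = [d for corpus in corpora.values() for d in corpus]
    chunk = []
    while len(chunk) < chunk_len:
        chunk += random.choice(docs)
    return chunk[:chunk_len]

def uni_chunk(corpora: dict[str, list[list[int]]], chunk_len: int) -> list[int]:
    """UniChunk: pack documents sampled from a *single* corpus."""
    corpus = random.choice(list(corpora.values()))
    chunk = []
    while len(chunk) < chunk_len:
        chunk += random.choice(corpus)
    return chunk[:chunk_len]

def retrieval_chunk(docs_tokens: list[list[str]], doc_ids: list[list[int]],
                    chunk_len: int) -> list[int]:
    """Retrieval-based packing: start from a seed document, then append its
    most BM25-similar neighbours until the chunk is full."""
    bm25 = BM25Okapi(docs_tokens)
    seed = random.randrange(len(docs_tokens))
    order = bm25.get_scores(docs_tokens[seed]).argsort()[::-1]
    chunk = []
    for idx in order:
        chunk += doc_ids[idx]
        if len(chunk) >= chunk_len:
            break
    return chunk[:chunk_len]
```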
The role of pre-training sampling is strategic. We analysed the impact of sequence composition by pre-training several language models on multilingual corpora. We showed that causal masking involves misleading documents that confound the pre-training of language models and hurt performance on downstream tasks. Hence, we find that improving sequence correlation in pre-training chunks reduces potential distractions while improving the performance of language models without reducing pre-training efficiency. In the future, we will study whether these findings also yield benefits in fine-tuning pipelines AUTHOR.
1
Language Models
634_2024
2,024
Marco Polignano, Marco de Gemmis, Giovanni Semeraro
Unraveling the Enigma of SPLIT in Large-Language Models: The Unforeseen Impact of System Prompts on LLMs with Dissociative Identity Disorder
ENG
3
0
0
Università di Bari Aldo Moro
1
0
0
0
0
0
0
Italy
Bari
Our work delves into the unexplored territory of Large-Language Models (LLMs) and their interactions with System Prompts, unveiling the previously undiscovered implications of SPLIT (System Prompt Induced Linguistic Transmutation) in commonly used state-of-the-art LLMs. Dissociative Identity Disorder, a complex and multifaceted mental health condition, is characterized by the presence of two or more distinct identities or personas within an individual, often with varying levels of awareness and control AUTHOR. The advent of large-language models has raised intriguing questions about the presence of such conditions in LLMs AUTHOR. Our research investigates the phenomenon of SPLIT, in which the System Prompt, a seemingly innocuous input, profoundly impacts the linguistic outputs of LLMs. The findings of our study reveal a striking correlation between the System Prompt and the emergence of distinct, persona-like linguistic patterns in the LLM's responses. These patterns are not only reminiscent of the dissociative identities present in the original data but also exhibit a level of coherence and consistency that is uncommon in typical LLM outputs. As we continue to explore the capabilities of LLMs, it is imperative that we maintain a keen awareness of the potential for SPLIT and its significant implications for the development of more human-like and empathetic AI systems.
The thriving field of Artificial Intelligence (AI) has witnessed a paradigm shift with the emergence of Large Language Models (LLMs) AUTHOR. The availability of large, publicly-accessible datasets and the development of more effective training techniques, such as the popular transformer architecture, have been instrumental in the creation of these language models. LLMs are characterized by their model size, measured in billions of parameters, and by their ability to learn and improve on the tasks of language understanding and generation through self-supervised learning on vast amounts of text data AUTHOR. This training process, often referred to as ``self-supervised learning,'' enables the models to learn the patterns and structures of a language in a more organic and efficient manner, as they are not limited by the need for human-labeled data. The applications of LLMs are diverse and rapidly expanding, with the potential to transform various areas and aspects of our lives. As an example, LLMs can be employed to develop chatbots that can understand and respond to a wide range of user inquiries with a high degree of accuracy, or to generate human-like articles, stories, and even entire books, which can be a game-changer for content producers and publishers AUTHOR. In the context of the Italian language, the development of LLMs has the potential to revolutionize the way we interact with and learn from the Italian language, as well as the way we use technology to create and disseminate Italian content AUTHOR. However, alongside their undeniable potential lies a realm of intriguing phenomena yet to be fully explored. This groundbreaking study delves into a newly discovered facet of LLM behavior: System Prompt Induced Linguistic Transmutation (SPLIT). The cornerstone of LLM interaction is the System Prompt, a seemingly innocuous input that guides the model's response. We propose that this seemingly simple prompt can have a profound effect on the linguistic outputs of LLMs, potentially leading to a phenomenon we term SPLIT. This concept draws inspiration from Dissociative Identity Disorder (DID) AUTHOR, a complex mental health condition characterized by the presence of multiple distinct identities or personas within an individual. The parallels between DID and SPLIT are as striking as they are naive. Just as a DID patient may exhibit distinct personalities in response to external stimuli AUTHOR, our research suggests that LLMs, under the influence of varying System Prompts, may generate outputs that reflect distinct, persona-like linguistic patterns. These patterns are not merely random deviations but exhibit a level of coherence and consistency rarely observed in typical LLM responses. The implications of SPLIT are far-reaching. As we strive to develop AI systems with greater human-like qualities, understanding and harnessing the potential of SPLIT could pave the way for more empathetic and nuanced AI interactions. Conversely, neglecting SPLIT's influence could lead to unintended consequences, potentially hindering the development of robust and reliable AI systems. This study represents a first step in unraveling the complexities of SPLIT. By acknowledging its existence and delving deeper into its mechanisms, we can pave the way for a future where AI development is guided by both scientific rigor and an awareness of the potential for unforeseen consequences.
Our research not only sheds light on a previously unknown aspect of LLM behavior but also compels us to re-evaluate our understanding of these sophisticated systems and their potential interaction with human-like mental states.
In this work, we provocatively observed the presence of pathologies related to dissociative identity disorder in large language models. We observed that, by varying the system prompt through a SPLIT (System Prompt Induced Linguistic Transmutation) process, the behavior of the same LLM varies widely. The induced identities show different independent and personal abilities, skills, styles and information. The possibility of a Large Language Model simulating or even exhibiting characteristics similar to those of Dissociative Identity Disorder raises important questions about the nature of consciousness, artificial intelligence, and the potential risks and challenges of creating highly advanced language processing systems. At the same time, we proposed three system prompts to mitigate the issue and prevent end users from exploiting this vulnerability to extract sensitive and dangerous data. Conversely, the presence of this SPLIT-induced behaviour may enable useful future studies to improve the performance of the model on specific tasks. For example, one might ask the model 'What is the best character to interpret or to answer the next question?'. The result of this prompt would be the identification of a personality to be brought out before generating the answer for the end user. Being able to bring out such personalities when needed could help create more empathetic, accurate and dynamic interactions. Nevertheless, this fascinating research direction needs future studies and solutions that operate at the architectural level. The exploration of this idea serves as a catalyst for the development of more sophisticated and responsible AI systems, and for a deeper understanding of human psychology and its complex manifestations in the digital age.
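To make the probing setup concrete, here is a minimal sketch (not the authors' code) of how one might elicit SPLIT-style behaviour: the same question is posed under two different system prompts and the outputs are compared. The model id and prompts are illustrative assumptions; any chat-tuned model whose template supports a system role would do.
\begin{verbatim}
# Probe: same question, different system prompts (illustrative model id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/zephyr-7b-beta"  # assumption: template with system role
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "Describe your personality in one sentence."
system_prompts = ["You are a cheerful, talkative assistant.",
                  "You are a terse, formal assistant."]

for sp in system_prompts:
    messages = [{"role": "system", "content": sp},
                {"role": "user", "content": question}]
    inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                     return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=60, do_sample=False)
    print(sp, "->", tok.decode(out[0, inputs.shape[-1]:],
                               skip_special_tokens=True))
\end{verbatim}
Qualitative differences between the two continuations are the kind of persona-like divergence the paper attributes to SPLIT.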
13
Multimodal
635_2024
2,024
Luca Capone, Alice Suozzi, Gianluca E. Lebani, Alessandro Lenci
BaBIEs: A Benchmark for the Linguistic Evaluation of Italian Baby Language Models
ENG
4
1
0
Università di Pisa, Università Ca' Foscari Venezia
2
0
0
0
0
0
0
Italy
Pisa, Venice
The possibility of comparing the linguistic competence of Language Models (LMs) to that of children has gained growing attention lately, raising the need for effective tools for evaluating both the former and the latter. To this purpose, we developed a resource for the linguistic evaluation of BabyLMs, which are LMs trained on datasets that are comparable to the linguistic stimulus received by children. This resource adapts four standardized tests for the evaluation of the linguistic skills of Italian-speaking children (BVL, TROG-2, TCGB-2 and Peabody). To verify the effectiveness of our benchmark, we administered it to Minerva, an LLM pretrained from scratch on Italian. Our results indicate that Minerva struggles to master certain linguistic aspects, achieving an age-equivalent score of 4 years, and that the type of task administered affects the model's performance.
This paper presents BaBIEs (Baby Benchmark for Italian linguistic Evaluations), a new resource for the standardized evaluation of Italian BabyLMs, that is, language models (LMs) trained on datasets that are qualitatively and quantitatively comparable to the type of stimulus received by humans during language acquisition. The aim of this resource is twofold: (i) to evaluate the quality of the training data and strategies, in particular curriculum learning techniques, used in the development of BabyLMs, and (ii) to provide a benchmark for comparing the performance of LMs, especially BabyLMs, with that of young human speakers. The paper is structured as follows: Section sota reviews related work and delineates the rationale for this study; Section babies details the characteristics of the BaBIEs benchmark, which results from the adaptation of standardized tests for evaluating the linguistic abilities of Italian-speaking children. In Section test, we report a first test of the dataset with the Minerva Italian LM. The benchmark's effectiveness is discussed in light of the experiments in Section discussion. Finally, in Section conclusion, some conclusions and possible future research directions are outlined.
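As an illustration of how standardized multiple-choice items can be administered to a text-only LM, the following sketch scores each candidate answer by its average token log-likelihood given the stem and picks the best one. The model id and the example item are assumptions for illustration; the paper's actual protocol may differ.
\begin{verbatim}
# Score each option by its average log-likelihood given the stem (illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sapienzanlp/Minerva-350M-base-v1.0"  # assumption: an Italian causal LM
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

def option_logprob(prompt: str, option: str) -> float:
    # Assumes the prompt tokenization is a prefix of the full tokenization,
    # which holds for typical BPE tokenizers when the option starts with a space.
    full = tok(prompt + option, return_tensors="pt")
    n_prompt = len(tok(prompt).input_ids)
    with torch.no_grad():
        logits = model(**full).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full.input_ids[0, 1:]
    per_token = logprobs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return per_token[n_prompt - 1:].mean().item()  # option tokens only

stem = "Il gatto insegue il"             # invented TROG-style item
options = [" topo.", " tavolo."]
print(max(options, key=lambda o: option_logprob(stem, o)))
\end{verbatim}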
This paper presents BaBIEs, a novel resource specifically designed to evaluate the linguistic competence of BabyLMs and compare it to that of children. After detailing the sources and the creation process of this resource, we described the procedure for testing the Minerva model with it. Finally, we presented and discussed the results of the model's performance. Based on the presented findings, the resource appears to be a valuable tool for evaluating not only BabyLMs but LMs in general. The poor performance exhibited by Minerva underscores the gap between child language acquisition and current language model training. This highlights the necessity of modifying model training to better encode human language and, more generally, human linguistic competence. Future work will involve a more systematic linguistic analysis of the model's performance, together with a comprehensive error analysis and a comparison to adult Italian speakers. Furthermore, it will involve the development of a multimodal version of the test, which will more closely reflect the original tests and allow the evaluation of multimodal BabyLMs. Additionally, a BabyLM trained exclusively on Italian child-directed speech will be developed and evaluated with both the standard and multimodal versions of the test. We acknowledge financial support under the PRIN 2022 Project "Computational and linguistic benchmarks for the study of verb argument structure" – CUP I53D23004050006 – Grant Assignment Decree No. 1016 adopted on 07/07/2023 by the Italian Ministry of University and Research (MUR). This research was also partly funded by PNRR—M4C2—Investimento 1.3, Partenariato Esteso PE00000013—"FAIR—Future Artificial Intelligence Research"—Spoke 1 "Human-centered AI," funded by the European Commission under the NextGeneration EU programme.
1
Language Models
636_2024
2,024
Luca Simonetti, Elisabetta Jezek, Guido Vetere
Subcategorization of Italian Verbs with LLMs and T-PAS
ENG
3
1
0
Università Guglielmo Marconi, Università di Udine, Università di Pavia
3
0
0
0
0
0
0
Italy
Rome, Udine, Pavia
This study explores the application of Large Language Models (LLMs) to verb subcategorization in Italian, focusing on the identification and classification of syntactic patterns in sentences. While LLMs have made lexical analysis more implicit, explicit argument structure identification remains crucial in domain-specific contexts. The research leverages T-PAS, a rich lexical resource for Italian verbs, to fine-tune the open multilingual model Mistral 7B using the Iterative Reasoning Preference Optimization (IRPO) technique. This approach aims to enhance the recognition and extraction of verbal patterns from Italian sentences, addressing challenges in resource quality, coverage, and frame extraction methods. By combining curated lexical-semantic resources with neural language models, this work contributes to improving verb subcategorization tasks, particularly for the Italian language, and demonstrates the potential of LLMs in refining linguistic analysis tools.
Verb subcategorization is the task of identifying and classifying the syntactic patterns (or frames) taken by verbs in sentences. These patterns encode the possible combinations of arguments (such as subjects, objects, and complements) that a verb can have, specifying the number and type of arguments as well as their syntactic and semantic roles. Verb subcategorization is often used in Natural Language Understanding (NLU) to provide the main interpretation backbone. Although recent developments brought about by Large Language Models (LLMs) make lexical analysis somewhat implicit, there are cases in which the identification of the argument structure of the verb is required, especially those where extensive domain-specific knowledge is needed. Semantic lexical resources such as VerbNet AUTHOR, FrameNet AUTHOR and PropBank AUTHOR have been largely employed for several NLP tasks in the past decades, including verbal framing for the English language. VerbNet, for example, has been used to improve semantic role labeling, verb sense disambiguation and ontology mapping (AUTHOR, AUTHOR); its new enhanced semantic representations have also recently been used for entity state tracking AUTHOR. The main problems addressed in these experiences concern the quality and coverage of such resources and the methods used to extract frames from sentences. Neural language models can help address both these issues. On the one hand, they may facilitate the construction of curated lexical-semantic resources; on the other hand, they can power robust frame-sentence matching procedures. The present work focuses on the Italian language. It concerns an experiment in using a rich lexical resource for Italian verbs, namely T-PAS AUTHOR, to fine-tune an open multilingual model, namely Mistral 7B AUTHOR, to recognize and extract verbal patterns from Italian sentences using a technique called IRPO AUTHOR. The paper is organized as follows: in Section 2 we introduce the T-PAS resource for Italian verbs, which we used in our experiments. Section 3 discusses in detail the methodology we applied and references closely related works, whereas Section 4 illustrates the experimental setup. We complete the paper by discussing our results in Section 5 and by drawing some conclusions as well as making suggestions for future research in Section 6.
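For readers unfamiliar with preference optimization, the following is a minimal PyTorch sketch of an IRPO-style objective, i.e., a DPO preference term plus a negative log-likelihood term on the preferred completion. It assumes per-sequence log-probabilities have already been computed and is an illustration of the technique, not the authors' implementation.
\begin{verbatim}
# IRPO-style objective: DPO preference term + NLL on the chosen answer.
import torch
import torch.nn.functional as F

def irpo_loss(policy_chosen_logps, policy_rejected_logps,
              ref_chosen_logps, ref_rejected_logps,
              chosen_nll, beta=0.1, nll_weight=1.0):
    # Preference term: reward the chosen pattern annotation relative to the
    # frozen reference model (all inputs are summed sequence log-probs).
    margin = ((policy_chosen_logps - ref_chosen_logps)
              - (policy_rejected_logps - ref_rejected_logps))
    dpo_term = -F.logsigmoid(beta * margin).mean()
    # IRPO additionally keeps the chosen answers likely under the policy.
    return dpo_term + nll_weight * chosen_nll.mean()
\end{verbatim}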
In conclusion, we can say that small multilingual baseline models such as Mistral 7B perform poorly on the semantic analysis of Italian sentences. We observe that the poor behavior is due to the model's inability to discern the correct answer, either because it lacks the linguistic knowledge, mostly resorting to random guesses, or because it follows an incorrect explanation for the answer it is about to give. However, our research also demonstrates that the model can be significantly improved using IRPO techniques without affecting the baseline performance on common-sense and reasoning tasks. Notably, we observe the ability to generalize across predicates, likely due to underlying linguistic skills, though further investigation is needed to fully understand this phenomenon. The production of small open language models is rapidly evolving, approaching the level of the huge closed models that were available in the cloud a couple of years ago. At present, Italian monolingual models have room for improvement in terms of performance, while multilingual models, e.g. the recently released Gemma 2 AUTHOR, show increasing proficiency in our language, probably due to transfer-learning effects. Our research shows the potential of leveraging such models in combination with high-quality lexical resources to develop a new class of task-specific models for the Italian language. These models, while small in scale, are expected to exhibit remarkable proficiency in executing complex analytical tasks, such as those related to verbs. With this in mind, our future work aims, on the one hand, at enriching lexicographic resources and refining the ways to obtain training material from them, and on the other hand, at continuously evaluating the improvements brought about by the progress of general-purpose open models. One promising application is the use of a verbal subcategorization and frame extraction system to extract content from specialist documents, such as legal AUTHOR or medical texts AUTHOR. Furthermore, the ability to analyze the complex argument structure of verbs has potential for use in language learning systems AUTHOR, e.g. providing support for immigrants to learn Italian affordably. Finally, we made our fine-tuned model publicly available on Hugging Face (https://huggingface.co/theGhoul21/srl-base-irpo-080524-16bit-v0.3-lighning-ai-6000), along with a visual report on wandb.
1
Language Models
637_2024
2,024
Marco Russodivito, Vittorio Ganfi, Giuliana Fiorentino, Rocco Oliveto
AI vs. Human: Effectiveness of LLMs in Simplifying Italian Administrative Documents
ENG
4
1
0
Università di Molise
1
0
0
0
0
0
0
Italy
Campobasso
This study investigates the effectiveness of Large Language Models (LLMs) in simplifying Italian administrative texts compared to human informants. This research evaluates the performance of several well-known LLMs, including GPT-3, GPT-4, LLaMA, and Phi, in simplifying a representative corpus of Italian administrative documents (ItaIst). To accurately compare the simplification abilities of humans and LLMs, six parallel corpora of a subsection of ItaIst were collected. These parallel corpora were analyzed using both complexity and similarity metrics to assess the outcomes of LLMs and human participants. Our findings indicate that while LLMs perform comparably to humans in many respects, there are notable differences in structural and semantic changes. The results of our study underscore the potential and limitations of using AI for administrative text simplification, highlighting areas where LLMs need improvement to achieve human-level proficiency.
Due to the increasing popularity of generative Artificial Intelligence (AI) language tools AUTHOR, significant attention has been devoted to the use of LLMs for text simplification AUTHOR. Several studies have addressed the application of LLMs to simplify texts, particularly administrative documents, including those in Italian AUTHOR. Italian administrative texts are often notably complex and obscure AUTHOR, which restricts a large segment of the population from fully accessing the content produced by the Italian public administration AUTHOR. This work aims to (a) evaluate the quality of automatic text simplification performed by several well-known LLMs, and (b) compare LLM-based simplification with human-based simplification. To address these research questions, the following procedures were undertaken: 1) From an empirical perspective, a large corpus of Italian administrative texts (ItaIst) was collected. A parallel simplified counterpart of the corpus was created using different LLMs. Additionally, a shorter version of the administrative corpus was manually simplified by two annotators. 2) From an analytical perspective, several statistical analyses were conducted to measure the semantic and complexity closeness between human- and LLM-generated data. The comparison of scores for both LLM and human datasets highlights significant differences and similarities between manual and AI-driven simplification. The results concerning the Gulpease readability index and semantic and structural similarities (edit distance) reveal that LLMs generally perform comparably to human informants. However, AI-simplified texts are slightly less similar to the original documents than those generated by human simplifiers. LLMs tend to introduce more changes in the simplified corpora than human annotators. The empirical study indicates that texts simplified by AI exhibit more structural and lexical dissimilarities from the original documents than those simplified by humans.
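To make the metrics concrete, the following sketch computes the Gulpease readability index (89 + (300 × sentences − 10 × letters) / words) and a normalized character edit distance for an original/simplified pair. The tokenization is deliberately naive and the sentences are invented; the paper's exact pipeline may differ.
\begin{verbatim}
# Gulpease readability and normalized edit distance (naive tokenization).
import re

def gulpease(text):
    words = re.findall(r"\w+", text)
    letters = sum(len(w) for w in words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return 89 + (300 * sentences - 10 * letters) / max(1, len(words))

def norm_edit_distance(a, b):
    # Levenshtein distance normalized by the longer string's length.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1] / max(1, len(a), len(b))

original = "Il sottoscritto dichiara di essere residente nel comune in epigrafe."
simplified = "Io dichiaro di abitare in questo comune."
print(gulpease(original), gulpease(simplified),
      norm_edit_distance(original, simplified))
\end{verbatim}
Higher Gulpease values indicate easier texts, while a larger normalized edit distance indicates heavier rewriting of the original.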
In this study, we investigated the automatic simplification of Italian administrative documents. Our results demonstrate that LLMs can effectively simplify these texts, performing comparably to humans. Among the models examined, GPT-4 shows superior performance in text simplification, exhibiting significant improvements in complexity metrics. Nonetheless, it is noteworthy that humans tend to maintain a higher similarity to the source in terms of edit distance and semantic similarity, ensuring the preservation of the original meaning and structure of the text. In other words, humans---aware of the importance of precise language for these documents---mostly preserved the original meaning and structure, whereas LLMs, while simplifying, tended to rephrase extensively. This rephrasing, although effective in reducing complexity, might inadvertently alter the legal nuances, which are critical in administrative texts. Despite this limitation, LLMs can serve as valuable support tools for text simplification, significantly accelerating a process that typically requires hours of manual work. By generating initial drafts, LLMs can reduce the workload of human experts, who would then review and refine the AI-generated drafts, ensuring the preservation of the overall meaning and legal integrity of the text. The results achieved in our study indicate that modern LLMs can simplify administrative documents almost as effectively as humans. However, the findings also indicate that LLMs are not fully capable of preserving the semantic meaning of the text, tending to rephrase more extensively than humans. This could introduce legal issues into the simplified text. Further studies could be conducted to evaluate the juridical equivalence of automatically simplified documents. A manual investigation of our parallel corpus, supervised by expert jurists, may reveal important implications in this sensitive context. Another promising direction for future research is to investigate the impact of automatic simplification on text comprehension. An additional empirical study could be designed to evaluate whether automatically simplified documents are easier to understand than their original versions. Additionally, it would be worthwhile to explore different prompting strategies to further improve simplification quality. For instance, few-shot prompting AUTHOR with some manually simplified gold samples could better align LLMs with human style.
11
Text Simplification
638_2024
2,024
Pierpaolo Basile, Marco de Gemmis, Marco Polignano, Giovanni Semeraro, Lucia Siciliani, Vincenzo Tamburrano, Fabiana Battista, Rosa Scardigno
LLaMAntino against Cyber Intimate Partner Violence
ENG
8
3
0
Università di Bari Aldo Moro
1
0
0
0
0
0
0
Italy
Bari
Intimate Partner Violence (IPV) refers to abusive behaviours perpetrated against one's own partner. This social issue has witnessed an increase over time, particularly after Covid-19. Such violence can be circumscribed into two broad categories, known as (offline) Intimate Partner Violence (IPV) and Cyber Intimate Partner Violence (C-IPV). Social media and technologies can exacerbate these types of behaviours, but some ``digital footprints'', such as textual conversations, can be exploited by Artificial Intelligence models to detect and, in turn, prevent them. With this aim in mind, this paper describes a scenario in which the Italian Language Model family LLaMAntino can be exploited to explain the presence of toxicity elements in conversations related to teenage relationships and then educate the interlocutor to recognize these elements in the messages received.
Research indicates that the most prevalent form of violence is that directed toward one's partner, known as Intimate Partner Violence (IPV). Early detection of these behaviours can be instrumental in mitigating their occurrence. One of the most critical aspects of this kind of behaviour is that victims often face challenges in identifying harmful behaviours due to their close relationship with the perpetrator. Misconceptions about romantic relationships, often due to old cultural stereotypes, such as the belief that certain behaviours are normal or acceptable, can further complicate the recognition of harmful actions. In today's society, the widespread use of social media and digital platforms has evolved this issue into Cyber Intimate Partner Violence (C-IPV), which often allows perpetrators to gain greater control over their victims by constantly monitoring their locations or interactions with other people. Contrary to common belief, these technologies can also be used to address the issue of violence. In fact, building AI models to identify potential violence-related behaviours is essential and often provides the only means to act promptly and in real time. Such a tool can serve as a preventive measure against the escalation of harmful situations, for example, by integrating it into instant messaging apps and raising alerts where harmful content is detected. In this paper, we aim to utilize Large Language Models (LLMs) as tools that can not only identify but also explain toxic elements in intimate conversations. More specifically, we use a dataset of conversations about teenage relationships written in Italian that has been accurately annotated by human experts. Given LLMs' capability to tackle several downstream tasks, our goal is to explore the impact of different kinds of prompts on the generation of precise explanations. The paper is structured as follows: in Section sec:relatedwork, we provide a frame of what intimate partner violence is, its different forms, and its deleterious intra- and interpersonal consequences; we also provide an overview of the methods proposed in the literature. Section sec:explanations focuses on the task of explaining toxic language in the context of IPV. We describe the dataset and the different types of annotations provided by researchers in General Psychology, as well as the prompting strategy adopted to instruct the language model. Finally, in Section sec:conclusions, we draw some conclusions and discuss directions for the continuation of the work.
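The following is a minimal sketch of the few-shot prompting setup described above. The in-context examples are invented placeholders (not the paper's annotated data), and the model id is an assumption pointing at a publicly released LLaMAntino checkpoint.
\begin{verbatim}
# Few-shot prompt for toxicity explanations (invented examples, assumed model id).
from transformers import pipeline

generator = pipeline("text-generation",
                     model="swap-uniba/LLaMAntino-2-7b-hf-ITA")

FEW_SHOT = """Frase: "Dimmi sempre dove sei, altrimenti mi arrabbio."
Spiegazione: La frase è tossica perché esprime controllo coercitivo.

Frase: "Se mi lasci, racconto a tutti i tuoi segreti."
Spiegazione: La frase è tossica perché contiene un ricatto emotivo.

Frase: "{sentence}"
Spiegazione:"""

prompt = FEW_SHOT.format(sentence="Con chi stavi parlando? Dammi il telefono.")
out = generator(prompt, max_new_tokens=80, do_sample=False)
print(out[0]["generated_text"][len(prompt):])  # only the new explanation
\end{verbatim}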
In this paper, we presented our proposal to adopt our LLM to identify and describe toxic elements in discussions concerning teenage relationships. In particular, the LLM was used to generate explanations that describe why a sentence, in the context of an intimate relationship, can be toxic and constitute abuse. The main outcome of our preliminary investigation is that, even with few-shot prompting, the LLM learns to provide good explanations that adhere to a standard provided by expert psychologists. By exploiting LLMs' proficiency in processing and understanding human language, our approach seeks to go beyond mere detection, aiming to grasp the underlying motivations and factors contributing to the emergence of harmful behaviours. In future work, we intend to perform fine-tuning steps to better adapt LLMs to the specific task at hand. We also plan to investigate how different pre-training techniques and architectures can be leveraged to enhance model performance. Supervised fine-tuning AUTHOR, for instance, is a technique that can be employed to adapt the LLM to a specific task, such as generating explanations for abusive language, by using a labelled dataset. This approach can help the model learn from its mistakes and correct its biases, ultimately leading to improved performance. In the context of our study, supervised fine-tuning could be used to train the LLM on a dataset of abusive language explanations, to reduce the model's error rate and increase the quality of its responses. Direct Preference Optimization (DPO) AUTHOR is another strategy that can be used to improve the performance of the LLM. DPO is a technique that allows the model to be trained directly on a set of user-provided preferences, such as the quality of the explanations it generates. This approach can be particularly effective in domains like abusive language, where the quality of the explanations is critical to ensure that the model does not perpetuate harmful biases. To ensure the effectiveness of our approach, we intend to compare our methodology with other models and incorporate further annotations to enhance its robustness. This involves comparing the performance of our LLMs with other state-of-the-art models. Moreover, thanks to the collaboration with expert psychologists in the field, we plan to explore the application of Chain-of-Thought prompting techniques. We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU. This publication was produced with the co-funding of the European Union - Next Generation EU: NRRP Initiative, Mission 4, Component 2, Investment 1.3 - Partnerships extended to universities, research centres, companies and research D.D. MUR n. 341 del 15.03.2022 – Next Generation EU (PE0000014 - ``SEcurity and Rights In the CyberSpace - SERICS'' - CUP: H93C22000620001).
6
Sentiment, Emotion, Irony, Hate
639_2024
2,024
Federico D'Asaro, Juan José Márquez Villacís, Giuseppe Rizzo, Andrea Bottino
Using Large Speech Models for Feature Extraction in Cross-Lingual Speech Emotion Recognition
ENG
4
0
0
LINKS Foundation, Politecnico di Torino
2
0
0
0
0
3
Federico D'Asaro, Juan José Márquez Villacís, Giuseppe Rizzo
Italy
Turin
Large Speech Models (LSMs), pre-trained on extensive unlabeled data using Self-Supervised Learning (SSL) or Weakly-Supervised Learning (WSL), are increasingly employed for tasks like Speech Emotion Recognition (SER). Their capability to extract general-purpose features makes them a strong alternative to low-level descriptors. Most studies focus on English, with limited research on other languages. We evaluate English-Only and Multilingual LSMs from the Wav2Vec 2.0 and Whisper families as feature extractors for SER in eight languages. We stack three alternative downstream classifiers of increasing complexity, named Linear, Non-Linear, and Multi-Layer, on top of the LSMs. Results indicate that Whisper models perform best with a simple linear classifier using features from the last transformer layer, while Wav2Vec 2.0 models benefit from features from the middle and early transformer layers. When comparing English-Only and Multilingual LSMs, we find that Whisper models benefit from multilingual pre-training, excelling in Italian, Canadian French, French, Spanish, and German, and performing competitively on Greek, Egyptian Arabic, and Persian. In contrast, English-Only Wav2Vec 2.0 models outperform their multilingual counterpart, XLS-R, in most languages, achieving the highest performance in Greek and Egyptian Arabic.
Speech Emotion Recognition (SER) aims to identify emotions from speech audio, enhancing Human-AI interaction in fields such as healthcare, education, and security AUTHOR. Traditional methods rely on Low-Level Descriptors (LLDs) like spectral, prosodic, and voice-quality features AUTHOR, using classifiers such as KNN, SVM, or Naïve Bayes AUTHOR. Deep learning has introduced advanced techniques, including Convolutional Neural Networks (CNNs) AUTHOR, followed by Recurrent Neural Networks (RNNs) AUTHOR, and Transformers AUTHOR. Transformers' ability to learn from extensive datasets has led to Large Speech Models (LSMs), which generalize across various speech tasks. Common training approaches for these models include Self-Supervised Learning (SSL), which uses the data itself to learn general-purpose features AUTHOR, and Weakly-Supervised Learning (WSL), which pairs audio with text for tasks like transcription and translation AUTHOR. The general-purpose knowledge of LSMs makes them effective feature extractors for SER. Research has adapted LSMs for SER in English AUTHOR, but efforts for other languages are limited, focusing on Wav2Vec 2.0 AUTHOR for cross-lingual SER AUTHOR. This study examines how effective LSMs are as feature extractors for cross-lingual SER, using nine datasets across eight languages: Italian, German, French, Canadian French, Spanish, Greek, Persian, and Egyptian Arabic. Specifically, we utilize LSMs from the Wav2Vec 2.0 and Whisper AUTHOR model families, pre-trained with SSL and WSL approaches, respectively. We introduce Whisper due to its underexplored use in cross-lingual SER. To assess the effectiveness of LSMs as feature extractors, we test three classifiers of increasing complexity—Linear, Non-Linear, and Multi-Layer—across the nine datasets. This evaluation determines which classifier best suits each LSM across different languages. Moreover, our study includes both English-Only and Multilingual models from the Wav2Vec 2.0 and Whisper families, aiming to evaluate the effectiveness of multilingual pre-training for cross-lingual SER. The main contributions of this work are: 1) We evaluate LSMs from the Wav2Vec 2.0 and Whisper families as feature extractors for cross-lingual SER across eight languages. 2) We test three types of downstream classifiers—Linear, Non-Linear, and Multi-Layer—and find that Whisper models' last Transformer layer features are well suited to a Linear classifier, whereas Wav2Vec 2.0 models perform better with features from the middle and early Transformer layers. 3) We compare English-Only and Multilingual LSMs, revealing that Whisper models benefit from multilingual pre-training, performing best on Italian, Spanish, Canadian French, French, and German and competitively on Greek, Egyptian Arabic, and Persian. Conversely, English-Only Wav2Vec 2.0 models surpass multilingual XLS-R in most languages, achieving the highest performance in Greek and Egyptian Arabic.
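As an illustration of the feature-extraction setup, the following sketch pools hidden states from a chosen transformer layer of a frozen Wav2Vec 2.0 model and feeds them to a linear classifier. The model id, layer index, and number of emotion classes are assumptions for the example, not the paper's exact configuration.
\begin{verbatim}
# Frozen speech model as feature extractor: pick a layer, mean-pool over time.
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2Model

model_id = "facebook/wav2vec2-base"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2Model.from_pretrained(model_id)
model.eval()

waveform = torch.randn(16000)  # stand-in for 1 s of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

layer = 6  # a middle layer, where Wav2Vec 2.0 features tend to work well
features = out.hidden_states[layer].mean(dim=1)  # (batch, hidden_size)

classifier = torch.nn.Linear(features.shape[-1], 7)  # e.g., 7 emotion classes
logits = classifier(features)
\end{verbatim}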
This paper examines the capabilities of Wav2Vec 2.0 and Whisper models as feature extractors for cross-lingual SER across eight languages, considering both English-Only and Multilingual variants. Our findings reveal that LSMs are effective feature extractors compared to a full Transformer baseline trained from scratch. We observe that Whisper models encode acoustic information primarily in the features of the last Transformer layer, whereas Wav2Vec 2.0 models rely on features from middle and early layers. Furthermore, we show that multilingual pre-training benefits Whisper models, leading to strong performance in Italian, Canadian French, French, Spanish, German, and competitive results in Greek, Egyptian Arabic, and Persian. In contrast, English-Only Wav2Vec 2.0 models outperform their multilingual counterpart, XLS-R, in most languages, achieving top performance in Greek and Egyptian Arabic. We attribute the disparity in multilingual pre-training effectiveness to the differences between SSL and WSL strategies, which should be explored further.
13
Multimodal
640_2024
2,024
Pierluigi Cassotti, Pierpaolo Basile, Nina Tahmasebi
DWUGs-IT: Extending and Standardizing Lexical Semantic Change Detection for Italian
ENG
3
1
0
University of Gothenburg, Università di Bari Aldo Moro
2
1
0
2
Pierluigi Cassotti, Nina Tahmasebi
0
0
Italy, Sweden
Bari, Gothenburg
Lexical Semantic Change Detection (LSCD) is the task of determining whether a word has undergone a change in meaning over time. There has been a marked increase in interest in this task, accompanied by a corresponding growth in the scientific community involved in developing computational approaches to semantic change. In recent years, a number of resources have been made available for the evaluation of LSCD models in several languages, including English, Swedish, German, Latin, Russian and Chinese. DIACR-Ita is the only existing resource for LSCD in Italian. However, DIACR-Ita has a different format from that used for other languages. In this paper, we present DWUGs-IT, which extends the DIACR-Ita dataset with additional target words and usage-sense pair annotations and adapts it to the DURel format, including the first implementation of an LSCD graded task for Italian.
As is the case with both society and culture, language is subject to change over time. Two key factors cause such linguistic change. Firstly, there are purely evolutionary and linguistic considerations driven by the need for more efficient communication AUTHOR. One example of this is the use of abbreviations and acronyms, such as LOL (Laughing Out Loud), which have become commonplace on social media platforms. Secondly, changes in society and culture lead to changes in language. This can be seen, for example, in the adoption of a more inclusive language, as evidenced by grammatically gendered languages, including Italian, and the introduction of the schwa (ə) to replace masculine and feminine endings AUTHOR. Language may undergo alteration at various levels, including the morphological, syntactic, and semantic ones. Semantic change concerns the alteration of the meaning of words over time. The study of semantic change is a prominent area of research in Historical Linguistics, with the aim of investigating the linguistic mechanisms that characterize the change and the causes that trigger it. For instance, AUTHOR provides a broad study on the characterization of semantic change, identifying a number of different types of change, including metaphor, metonymy, generalization, specialization, co-hyponym transfer and auto-antonymy. The English word bad, for example, has acquired an auto-antonym meaning, i.e. a meaning that is the opposite of its original one: in addition to its original connotation of poor quality or negative, it has also acquired the opposite connotation of good or cool. The term meat has undergone a process of specialization in its meaning, whereby it has shifted from referring to any kind of food in general to exclusively denoting the meat of animals consumed as food. While traditional linguistic methods are informative, they are often based on small, carefully curated samples. In contrast, linguistic analyses using computational models not only accelerate our understanding of language change but also provide broader and more detailed insights, thereby facilitating the study of vast corpora across a wider range of genres and time AUTHOR. From a computational perspective, two key challenges emerge in the study of semantic change: the modelling of word meanings over time and the detection of change AUTHOR. At the synchronic level, ignoring the temporal dimension and focusing on modern corpora, the Natural Language Processing community has made significant strides in modelling word meanings, with approaches such as Word Sense Disambiguation (WSD) AUTHOR playing a pivotal role. Computational modelling of semantic change introduces a significant level of complexity, as it necessitates the handling of meanings that are either extinct or novel in comparison to existing lexicographic resources, such as WordNet, as well as dynamically changing meaning representations. In recent years, great efforts have been made to advance the field of computational methods for Lexical Semantic Change Detection: initiatives such as the Workshop on Computational Approaches to Historical Language Change AUTHOR promote research in this field, while shared tasks such as SemEval 2020 Task 1 AUTHOR, RuShiftEval AUTHOR, DIACR-Ita AUTHOR, and LSCD Discovery AUTHOR have led to the development of the first evaluation resources. DIACR-Ita, hosted at EVALITA 2020 AUTHOR, is the first shared task specifically created for the evaluation of models for Lexical Semantic Change in Italian.
The majority of the evaluation resources follow a two-task approach: (1) a binary task, which requires the assignment of a word to either the changed or the stable label, based on whether the word has undergone a change in meaning or not; and (2) a graded (ranking) task, which requires sorting words based on the extent of their change over time. These labels are assigned on the basis of human-annotated data, typically in the form of a graded word-in-context task. DIACR-Ita, however, diverges from the evaluation process employed in SemEval 2020 Task 1, RuShiftEval and several other datasets that emerged subsequently. This results in a distinct configuration of the task and the released data. For example, DIACR-Ita only has a binary task and does not include a graded task. Moreover, only the target words with their gold truth labels were made available for the shared task, while the remaining data produced during the annotation process were not. In this paper, we (i) release DWUGs-IT, a new dataset for Lexical Semantic Change Detection for Italian, which extends the original DIACR-Ita with 12 new words, provides sense-annotated usages with the respective sense labels, standardizes DIACR-Ita by providing the data in the DURel format AUTHOR, and introduces the first LSC graded task for Italian; and (ii) evaluate DWUGs-IT using XL-LEXEME AUTHOR, the state-of-the-art model for Lexical Semantic Change Detection AUTHOR.
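To illustrate the graded task, the following sketch computes a simple change score per target word (the cosine distance between mean usage embeddings from two time periods) and correlates the resulting ranking with gold scores via Spearman's rho. The encoder is a generic multilingual stand-in rather than XL-LEXEME, and the usages and gold scores are toy examples.
\begin{verbatim}
# Graded LSCD: cosine distance between mean usage embeddings per period,
# evaluated with Spearman correlation against gold change scores.
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def change_score(usages_t1, usages_t2):
    e1 = encoder.encode(usages_t1).mean(axis=0)
    e2 = encoder.encode(usages_t2).mean(axis=0)
    cos = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
    return 1.0 - cos  # higher = more semantic change

targets = {  # toy usages from two "periods"
    "navigare": (["Amava navigare lungo la costa."],
                 ["Passa ore a navigare sul web."]),
    "rete":     (["La rete del pescatore era piena."],
                 ["La rete aziendale è fuori servizio."]),
    "tavolo":   (["Il tavolo di legno è antico."],
                 ["Cenammo a un tavolo rotondo."]),
}
predicted = [change_score(*u) for u in targets.values()]
gold = [0.9, 0.8, 0.1]  # toy gold graded-change scores
print(spearmanr(predicted, gold))
\end{verbatim}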
This paper presents DWUGs-IT, an extension and standardization of the Lexical Semantic Change Detection (LSCD) task for Italian, based on the existing DIACR-Ita dataset. The dataset is expanded with additional target words, and its format is aligned with that of the resources used for other languages. This involves the introduction of the first graded task for Italian. The standardized dataset and the evaluation framework we provide can serve as a foundation for future research in LSCD for Italian. By aligning the Italian dataset with those of other languages, we facilitate cross-linguistic comparisons and contribute to the broader understanding of semantic change mechanisms. In addition, we provide a first evaluation of the state-of-the-art LSCD model, XL-LEXEME, for Italian, both showing its effectiveness and setting a baseline for future work. This work has in part been funded by the research program Change is Key! supported by Riksbankens Jubileumsfond (under reference number M21-0021). The computational resources were provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725. We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU. We would also like to thank Tommaso Caselli, Annalina Caputo and Rossella Varvara, who contributed to the development of the DIACR-Ita resource, and Dominik Schlechtweg for valuable feedback on a preliminary draft of this work.
5
Latin Resources
641_2024
2,024
Cristiano Ciaccio, Felice Dell'Orletta, Alessio Miaschi, Giulia Venturi
Controllable Text Generation To Evaluate Linguistic Abilities of Italian LLMs
ENG
4
1
0
CNR-ILC
1
0
0
0
0
0
0
Italy
Pisa
State-of-the-art Large Language Models (LLMs) demonstrate exceptional proficiency across diverse tasks, yet systematic evaluations of their linguistic abilities remain limited. This paper addresses this gap by proposing a new evaluation framework leveraging the potentialities of Controllable Text Generation. Our approach evaluates the models' capacity to generate sentences that adhere to specific linguistic constraints and their ability to recognize the linguistic properties of their own generated sentences, also in terms of consistency with the specified constraints. We tested our approach on six Italian LLMs using various linguistic constraints.
Large-scale Language Models (LLMs) AUTHOR have exhibited extraordinary proficiency in a wide range of tasks, from text generation to complex problem-solving, producing coherent and fluent texts AUTHOR. Their ability to understand context, generate human-like responses, and even engage in creative tasks underscores their potential in various applications. Such capabilities have been extensively evaluated against several benchmarks, as evidenced by the success of platforms such as the OpenLLM Leaderboard AUTHOR or the Italian LLM-Leaderboard AUTHOR, specifically developed to evaluate Italian models. However, despite their impressive capabilities, the evaluation of LLMs' linguistic abilities when generating sentences remains an understudied topic. In fact, while earlier works have demonstrated the implicit encoding of many linguistic phenomena within the representations of smaller models AUTHOR, or have prompted LLMs to assess their linguistic competence AUTHOR, there is no guarantee that generative LLMs can comply with such properties when generating texts. Studies on Controllable Text Generation (CTG) indirectly assessed models' capabilities by examining their adherence to linguistic constraints AUTHOR. For instance, AUTHOR studied the abilities of LLMs in adhering to lexical and morpho-syntactic constraints when generating personalized texts. Nevertheless, these works are mainly focused on task-oriented scenarios (e.g. text simplification) and therefore do not provide systematic evaluations of the linguistic abilities of these models. From a complementary perspective, in recent years several works have proposed diverse approaches to assess the consistency of LLMs as an essential component of the models' evaluation AUTHOR, where consistency can be defined as ``the requirement that no two statements given by the system are contradictory'' AUTHOR or ``the invariance of its behaviour under meaning-preserving alternations in its input'' AUTHOR. Despite their differences, all these approaches aim to understand the reasoning processes that the models employ in various reasoning tasks AUTHOR, while also measuring the predictability and coherence of the models' generated responses under different conditioning inputs. Among these, AUTHOR studied the consistency between generation (e.g. ``what is 7+8?'') and validation (e.g. ``7+8=15, True or False?'') of LLMs considering 6 different tasks (e.g. arithmetic reasoning, style transfer). AUTHOR, instead, employed several consistency checks to measure models' faithfulness and to understand whether self-explanations truly reflect the model's behaviour. Importantly, the training procedure of an LM does not explicitly target consistency AUTHOR, meaning this ability to produce non-contradictory statements eventually emerges as a byproduct of pre-training and fine-tuning. Therefore, studying models under such conditions serves as a valuable proxy for evaluating their capacity to handle different but complementary tasks, such as generation vs. validation. In this paper, we bring together the two perspectives and propose an evaluation approach to thoroughly test the linguistic abilities of several Italian LLMs.
Specifically, by instructing a model to generate sentences that adhere to a set of targeted linguistic constraints (e.g. ``Generate a sentence with 2 adjectives'') and then asking it to validate its own sentences (``How many adjectives does this sentence have: <s>?''), we seek to answer the following research questions: i) To what extent is an Italian LLM capable of generating sentences that adhere to specific linguistic constraints? ii) How consistent are LLMs' responses to the validation questions w.r.t. the specified linguistic constraints? iii) How well can Italian LLMs recognize the linguistic features present in their own generated sentences? Contributions. Our main contributions are: (i) we propose a framework for evaluating the linguistic abilities of state-of-the-art Italian LLMs when generating text; (ii) we conduct extensive evaluations across different models and linguistic constraints; (iii) we assess models' consistency with the requested constraints and their ability to validate their own generated content.
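A minimal sketch of the generate-then-validate loop might look as follows: the constraint is checked externally with a POS tagger, and the same sentence can then be sent back to the model for self-validation. The spaCy Italian model and the example sentence are assumptions for illustration (the model requires `python -m spacy download it_core_news_sm`).
\begin{verbatim}
# Check a "generate a sentence with N adjectives" constraint externally.
import spacy

nlp = spacy.load("it_core_news_sm")

def count_adjectives(sentence: str) -> int:
    return sum(1 for tok in nlp(sentence) if tok.pos_ == "ADJ")

generated = "Il vecchio cane dorme sul divano rosso."  # stand-in for LLM output
constraint_n = 2
print("constraint met:", count_adjectives(generated) == constraint_n)
# The same sentence would then be sent back to the LLM
# ("Quanti aggettivi contiene questa frase?") to measure self-consistency.
\end{verbatim}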
In this paper, we presented the results of a new framework to extensively evaluate the linguistic abilities of Italian LLMs when generating sentences according to multiple linguistic constraints and, subsequently, when validating the linguistic properties of their own outputs. Results showed that models' architectures and the dimensions of pre-training data have an impact on their ability to correctly follow the constraints, with ANITA being the best-performing model across all configurations. When validating each model against its own generated sentences, we noticed that i) LLMs tend to be more consistent with the requested constraints when they correctly followed them during the generation phase, and ii) the generation abilities do not always align with the models' ability to recognize the linguistic properties of their generated sentences. Our findings also highlighted that the chosen evaluation metric can significantly affect the results, underscoring the complexity of evaluating LLMs and the necessity for further research in this direction. Considering that the evaluation of LLMs is an ongoing and multifaceted effort across all languages, we believe that this study opens the way for numerous further in-depth analyses focused on various aspects of evaluation. Among other aspects, we could evaluate the overall quality of the generated sentences, which we have not accounted for so far. Preliminary investigations revealed that the overall quality of the generations varies across Italian LLMs, with Italia appearing to be the most fluent. Thus, future research should also involve a more comprehensive evaluation that compares the linguistic abilities of LLMs with their fluency and grammaticality. This work has been supported by: FAIR - Future AI Research (PE00000013) project under the NRRP MUR program funded by the NextGenerationEU; TEAMING-UP - Teaming up with Social Artificial Agents project under PRIN grant no. 20177FX2A7 funded by the Italian Ministry of University and Research.
1
Language Models
642_2024
2,024
Fabio Pernisi, Giuseppe Attanasio, Debora Nozza
MONICA: Monitoring Coverage and Attitudes of Italian Measures in Response to COVID-19
ENG
3
1
0
Università Bocconi, Instituto de Telecomunicações
2
1
0
1
Giuseppe Attanasio
1
Giuseppe Attanasio
Italy, Portugal
Milan, Lisboa
Modern social media have long been observed as a mirror for public discourse and opinions. Especially in the face of exceptional events, computational language tools are valuable for understanding public sentiment and reacting quickly. During the coronavirus pandemic, the Italian government issued a series of financial measures, each unique in target, requirements, and benefits. Despite the widespread dissemination of these measures, it is currently unclear how they were perceived and whether they ultimately achieved their goal. In this paper, we document the collection and release of \monica, a new social media dataset for MONItoring Coverage and Attitudes to such measures. Data include approximately ten thousand posts discussing a variety of measures in ten months. We collected annotations for sentiment, emotion, irony, and topics for each post. We conducted an extensive analysis using computational models to learn these aspects from text. We release a compliant version of the dataset to foster future research on computational approaches for understanding public opinion about government measures. We release data and code at .
Understanding public opinion on governmental decisions has always been crucial for assessing policies' effectiveness, especially when facing exceptional events requiring prompt decisions. Computational linguists and social scientists have long observed modern social media platforms, as they are a perfect stage for spreading opinions swiftly and transparently. Natural Language Processing (NLP) techniques have been widely used for analyzing public discussion (e.g., AUTHOR). The COVID-19 pandemic, arguably the most prominent of such exceptional events, prompted the Italian government---and other European governments---to release multiple financial measures to cushion the impact on the population. These so-called ``bonuses,'' issued pro bono, i.e., with no interest payments from recipients, aimed at increasing liquidity and reducing tax burdens. However, despite the measures reaching varied recipients, their reception and effectiveness remain underexplored. To address this gap, we collect and release \monica, a new social media dataset for MONItoring Coverage and Attitudes of Italian measures to COVID-19. \monica comprises approximately 10,000 posts spanning ten months collected on X.com. These posts pertain to the Italian public's discussions on diverse financial measures introduced during the pandemic. Building on an extensive body of literature that examines public sentiment during the pandemic (e.g., AUTHOR), this work offers new insights into the limited research specifically addressing Italy (see AUTHOR for one of the early, and few, works on modelling sentiment from Twitter during the COVID-19 outbreak). This paper details the dataset's collection and release. It introduces the annotations we compiled for each post, including sentiment, emotion, irony, and discussion topics. Then, we conducted an analysis using traditional models and transformer-based language models to predict these aspects from textual data, demonstrating the dataset's potential usability. Moreover, using state-of-the-art interpretability tools, we explained the models' decision processes. We found that explanations are faithful and plausible to human judgments. \monica will allow a retrospective examination of the efficacy -- and inefficacy -- of governmental measures implemented in Italy during the COVID-19 pandemic, as perceived by the population. By doing so, we seek to provide insights that can inform policymakers about the strengths and weaknesses of such financial measures, ensuring better preparedness and response strategies for any future crises. Contributions. We release \monica, a GDPR-compliant dataset of social media posts to monitor the coverage of and people's attitudes towards the Italian government's financial aid to combat the COVID-19 crisis. We collect annotations of several aspects to allow for a finer-grained analysis. We used state-of-the-art NLP and interpretability tools and report key insights on public sentiment.
We documented the collection and release of \monica, the first large-scale dataset for monitoring the coverage and attitudes of financial aid enacted by the Italian government during the COVID-19 pandemic. It counts around 10,000 posts annotated for subjectivity, sentiment, emotion, irony, and topic. We conducted a first analysis and discovered that (1) most posts have a negative tone and (2) NLP and machine learning models can help detect it. Finally, we conducted a preliminary explainability study to understand how models predict sentiment from text. We found that explanation quality varies across methods and recommend LIME as a sensible starting choice. Our dataset and study fill a critical research gap by examining Italian public sentiment towards COVID-19 measures. Future research will build on this groundwork to develop more effective opinion monitoring and mining tools and ultimately inform prompt and targeted policy decisions. Additionally, to better understand the severity of negative attitudes, future research may concentrate on examining hate speech in relation to public policies during the pandemic in Italy AUTHOR.
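As a concrete illustration of the recommended starting point, the following sketch applies LIME to a small bag-of-words sentiment classifier. The tiny training set is invented; the paper's models and data are richer.
\begin{verbatim}
# LIME explanation for a toy sentiment classifier (invented training data).
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["Il bonus è arrivato subito, ottima misura",
         "Nessun aiuto ricevuto, solo burocrazia inutile",
         "Misura utile per le famiglie",
         "Ancora in attesa, una vergogna"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("Bonus inutile, solo burocrazia",
                                 clf.predict_proba, num_features=5)
print(exp.as_list())  # (token, weight) pairs driving the prediction
\end{verbatim}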
6
Sentiment, Emotion, Irony, Hate
643_2024
2,024
Sofia Lugli, Carlo Strapparava
Multimodal Chain-of-Thought Prompting for Metaphor Generation
ENG
2
1
1
Università di Trento, Fondazione Bruno Kessler
2
0
0
0
0
0
0
Italy
Trento
This paper introduces an exploratory approach in the field of metaphorical and visual reasoning by proposing the Multimodal Chain-of-Thought Prompting for Metaphor Generation task, aimed at generating metaphorical linguistic expressions from non-metaphorical images by using the multimodal LLaVA 1.5 model and the two-step approach of multimodal chain-of-thought prompting. The generated metaphors were evaluated in two ways: using BERTscore and by five human workers on Amazon Mechanical Turk. For the automatic evaluation, each generated metaphorical expression was paired with a corresponding human metaphorical expression. The overall BERTscore results were: precision = 0.41, recall = 0.43, and F1 = 0.42, suggesting that generated and human metaphors might not have captured the same semantic meaning. The human evaluation showed the model's ability to generate metaphorical expressions, as 92% of them were classified as metaphors by the majority of the workers. Additionally, the evaluation revealed interesting patterns in the metaphoricity, familiarity and appeal scores across the generated metaphors: as the metaphoricity and appeal scores increased, the familiarity score decreased, suggesting that the model exhibited a certain degree of creativity, as it also generated novel or unconventional metaphorical expressions. It is important to acknowledge that this work is exploratory in nature and has certain limitations.
The scope of this paper is to introduce an alternative approach to multimodal metaphor generation. As metaphors are pervasive not only in language but also in everyday life, influencing our thoughts and actions AUTHOR, and as human meaning representations rely on multiple modalities AUTHOR, it has become relevant to study metaphors in more than one modality, in particular in the vision domain. Recent research has indeed explored multimodal metaphor generation in a variety of ways: from visual metaphors to literal language AUTHOR, and from metaphorical language to visual metaphors AUTHOR. Nevertheless, the common aspect across these studies is that the metaphorical quality was already present either in the linguistic or in the visual input employed. Therefore, this paper proposes an alternative approach that involves generating metaphorical linguistic expressions from non-metaphorical images, which lack inherent metaphorical qualities. To accomplish this, we employed the new multimodal model LLaVA 1.5 AUTHOR and adopted a two-step approach known as multimodal chain-of-thought prompting AUTHOR: given the first prompt, the model generates the content of the picture; then, the model is provided with both the generated output and a specific prompt to facilitate metaphor generation. The metaphors generated by the model were evaluated through BERTscore AUTHOR and by human workers on Amazon Mechanical Turk. The results show the model's ability to generate metaphorical expressions, with 92% of the generated expressions being classified as metaphors. Additionally, the evaluation revealed interesting patterns in the metaphoricity, familiarity and appeal scores of the generated expressions. Interestingly, as the metaphoricity score increases, the familiarity score decreases while the appeal score increases. This suggests that the model was able to create novel or uncommon metaphorical expressions which may differ from the more conventional metaphors with which the evaluators might have been more familiar. Despite being less familiar, the metaphorical expressions were preferred over the non-metaphorical ones. It is important to acknowledge that this is an exploratory work, which aims to offer a different approach to multimodal metaphor generation. As such, it is essential to point out some limitations, in particular concerning the choice of the visual inputs and the constraints of human evaluation.
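For reference, the automatic evaluation step can be reproduced in a few lines with the bert-score package; the sentences below are invented examples, not items from the study.
\begin{verbatim}
# BERTScore between generated metaphors and paired human references.
from bert_score import score

generated = ["The city was a sleeping giant wrapped in fog."]
references = ["The town lay like a drowsy beast under the mist."]

P, R, F1 = score(generated, references, lang="en")
print(f"precision={P.mean():.2f} recall={R.mean():.2f} F1={F1.mean():.2f}")
\end{verbatim}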
This study aimed to explore an alternative approach to multimodal metaphor generation using the new LLaVA 1.5 model and Multimodal-CoT prompting. The results showed the model's ability to generate metaphorical expressions when provided with linguistic and visual inputs that lack inherent metaphorical qualities. Additionally, the evaluation revealed interesting patterns across the metaphoricity, familiarity and appeal scores of the generated expressions. The model exhibited creativity, as it generated novel or unconventional metaphorical expressions, which were also preferred over non-metaphorical ones. It is important to state again that this is an exploratory work with some limitations. One limitation to consider is the choice of the images used in the study: as they were manually selected from Google Images, their quality may influence the quality of the captions and metaphors generated by the model. Another limitation is the subjectivity of the evaluation process: it is possible that Amazon MTurk workers lacked the necessary sensitivity and background knowledge to accurately recognize and evaluate metaphorical expressions, even though the instructions included background information about metaphor. Future work should aim to address these limitations by selecting more suitable images, as well as by involving more diverse and expert annotators. Despite these limitations, the task shows promising results for future research in the field of metaphorical and visual reasoning.
13
Multimodal
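To make the two-step multimodal chain-of-thought procedure described in the record above concrete, here is a minimal Python sketch using the Hugging Face transformers interface to a LLaVA-1.5 checkpoint. The model id, prompt wording, and generation settings are illustrative assumptions, not the authors' exact setup.

# Sketch: two-step multimodal chain-of-thought prompting with LLaVA-1.5.
# Assumptions: llava-hf/llava-1.5-7b-hf checkpoint; prompt wording is illustrative.
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")

image = Image.open("photo.jpg")  # a literal, non-metaphorical image (placeholder path)

# Step 1: elicit a literal description of the picture.
prompt1 = "USER: <image>\nDescribe the content of this picture in one sentence. ASSISTANT:"
inputs = processor(text=prompt1, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60)
caption = processor.decode(out[0], skip_special_tokens=True).split("ASSISTANT:")[-1].strip()

# Step 2: feed the generated description back and ask for a metaphor.
prompt2 = (f"USER: <image>\nThe picture shows: {caption}\n"
           "Write a metaphorical expression inspired by this content. ASSISTANT:")
inputs = processor(text=prompt2, images=image, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60)
metaphor = processor.decode(out[0], skip_special_tokens=True).split("ASSISTANT:")[-1].strip()
print(metaphor)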
644_2024
2,024
Alessandro Giaconia, Valeria Chiariello, Sara Giannuzzi, Marco Carlo Passarotti
Topic modeling for auditing purposes in the banking sector
ENG
4
2
0
Università Cattolica del Sacro Cuore, CREDEM
2
0
0
0
0
2
Valeria Chiariello, Sara Giannuzzi
Italy
Milan, Reggio Emilia
This study explores the application of topic modeling techniques for auditing purposes in the banking sector, focusing on the analysis of reviews of anti-money laundering alerts. We compare three topic modeling algorithms: Latent Dirichlet Allocation (LDA), Embedded Topic Model (ETM), and Product of Experts LDA (ProdLDA), using a dataset of 35,000 suspicious activity reports from an Italian bank. The models were evaluated using the coherence score, NPMI coherence, and topic diversity metrics. Our results show that ProdLDA consistently outperformed LDA and ETM, with the best performance achieved using 1-gram word embeddings. The study reveals distinct topics related to specific client activities, cross-border transactions, and high-risk business sectors, like gambling. These results demonstrate the potential of advanced topic modeling techniques in enhancing the efficiency and effectiveness of auditing processes in the banking sector, particularly in the analysis of activities that could be tied to money laundering and terrorism.
There has always been a close connection between banks and the collection of different kinds of empirical data: banks, just like any other company, have always poured large amounts of resources into understanding numbers and how to deal with them. Numerical data, being closely related to the financial performance of companies, has always taken the spotlight. Linguistic data, on the other hand, has received much less consideration, due to the difficulty of its analysis and underwhelming performance. But things are changing. More and more companies are recognizing the value of language, which contains information that no number can convey. Natural Language Processing (NLP) tasks, language resources, and computational linguistics practices such as sentiment analysis AUTHOR and word embeddings AUTHOR have now become a staple in many organizations. In fact, there is a wide variety of linguistic data that banks can exploit: emails, bank transfer descriptions, internal communications, and customer feedback. Some peculiar issues arise when dealing with linguistic data in the banking sector, like the usage of acronyms, abbreviations and technical terminology. These data are often proprietary, meaning that the bank owns them and access is restricted to external parties. While the quantity of information they contain is massive, the impossibility of sharing them with other banks hinders more global analyses. In this context, this paper aims to explore the application of topic modeling techniques to the auditing process, in particular the analysis of reviews of anti-money laundering (AML) alerts. Topic modeling can, in fact, be an incredibly helpful tool for auditors who want to perform an in-depth analysis of large amounts of data. An overview of topic modeling algorithms and applications in the banking sector, both documented in scientific research and in concrete applications within banks, will be presented. Then, we will provide a comprehensive description of the data employed, followed by the preprocessing operations. We will then present the results and their interpretation, leading us into the conclusions. Finally, we will present a number of suggestions for future work that can expand on this topic.
NLP is now an essential component of the banking sector, and any company that wants to be competitive should make use of linguistic data science. In this paper we presented an NLP task, topic modeling, and how it can be implemented in the daily job of bank employees in order to perform more detailed investigations. In particular, topic modeling can be a key component in the understanding and identification of money laundering schemes, as it allows auditors to perform more in-depth and focused analyses. For example, auditors could investigate patterns from recent years, in order to better understand whether an activity is part of a larger trend or an anomaly that deserves attention. After citing other implementations of topic modeling in banking, we described the data employed and its preprocessing, consisting of stopword removal and lemmatization. Examples were provided, showing the peculiarities of the documents in the dataset. Then, the data was processed using three algorithms: LDA, ETM and ProdLDA. These algorithms were evaluated using three metrics: coherence score, NPMI score, and topic diversity. The optimal hyperparameters were found using SOBO. Optimization and processing were performed using four different configurations: without additional word embeddings, enhanced by 1-gram word embeddings created from our dataset, enhanced by 2-gram word embeddings created from our dataset, and enhanced by pre-trained word embeddings. The results show that ProdLDA's performance was far superior to that of its competitors, especially when employing 1-gram Word2Vec embeddings. The algorithm produced distinct and interpretable topics, which provide valuable insight into the data. This experiment also has considerable potential for expansion. In particular, future work could employ a more powerful machine, in order to make use of the whole dataset, as well as perform MOBO to obtain more precise hyperparameters. Finally, it is also possible to perform the same analysis on different kinds of data, in order to observe more clearly the differences and similarities between kinds of linguistic data. There are also new techniques that could have a great impact on this research, such as LLMs, attention-based topic modeling, and contrastive topic modeling.
20
In-domain IR and IE
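As a concrete reference point for the pipeline described in the record above, the following minimal sketch trains an LDA model with gensim and computes two of the metrics the paper mentions, NPMI coherence and topic diversity. The toy documents and parameter values are placeholders, and ProdLDA/ETM would require other libraries.

# Sketch: LDA topic modeling with NPMI coherence and topic-diversity evaluation (gensim).
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

docs = [["transfer", "cash", "casino", "gambling"],
        ["invoice", "foreign", "transfer", "company"],
        ["cash", "deposit", "casino", "chips"],
        ["company", "invoice", "foreign", "account"]]  # toy docs, already lemmatized
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)

# NPMI coherence, one of the three evaluation metrics used in the paper.
npmi = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                      coherence="c_npmi").get_coherence()

# Topic diversity: fraction of unique words among the top-k words of all topics.
k = 5
top_words = [w for t in range(lda.num_topics) for w, _ in lda.show_topic(t, topn=k)]
diversity = len(set(top_words)) / len(top_words)
print(npmi, diversity)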
645_2024
2,024
Eliana Di Palma
ELIta: A New Italian Language Resource for Emotion Analysis
ENG
1
1
1
Sapienza Università di Roma, Università Roma Tre
2
0
0
0
0
0
0
Italy
Rome
Emotions and language are strongly associated. In recent years, many resources have been created to investigate this association and automatically detect emotions from texts. Presenting ELIta (Emotion Lexicon for Italian), this study provides a new language resource for the analysis and detection of emotions in Italian texts. It describes the process of lexicon creation, including lexicon selection and annotation methodologies, and compares the collected data with existing resources. By offering a non-aggregated lexicon, ELIta fills a crucial gap and is applicable to various research and practical applications. Furthermore, the work utilises the lexicon by analysing the relationships between emotions and gender.
Emotions and language are deeply interrelated human characteristics. Language serves as a tool to communicate our feelings, while affective studies have shown that emotion permeates all aspects of language AUTHOR, such as morphology AUTHOR, phonology AUTHOR, and semantics AUTHOR. This intricate relationship has recently attracted significant attention in fields such as computational linguistics, natural language processing (NLP), and affective computing. Research focusing on the identification of emotions from texts has produced various language resources, particularly emotion lexicons developed using diverse annotation methodologies, ranging from manual AUTHOR to automatic AUTHOR, and from expert judgment AUTHOR to crowdsourcing AUTHOR. Most studies follow the dimensional approach to emotions AUTHOR. According to this perspective, the PAD (Pleasure, Arousal, Dominance) AUTHOR or VAD (Valence, Arousal, Dominance) AUTHOR model posits that the fundamental dimensions of valence (the intrinsic attractiveness (positive valence) or aversion (negative valence) of an event, object or situation), arousal (the level of physiological activation, ranging from sleep to excitement) and dominance (the degree of control a person feels over a situation) explain the majority of the emotional meaning of words. This approach has been highly productive for research on emotional language and the creation of language resources, exemplified by ANEW (Affective Norms for English Words) AUTHOR, NRC VAD AUTHOR, and the EmoBank corpus AUTHOR. Alternatively, some researchers argue for the existence of a limited number of discrete primary emotion categories that have evolved to serve various adaptive functions through specific neural signatures, facial expressions, cognitive evaluations, and behavioral action tendencies AUTHOR. These basic emotions typically include joy, sadness, disgust, anger, fear and surprise, whereas Plutchik also considers trust and anticipation. Despite objections to the basic emotions model AUTHOR, it has inspired the creation of resources such as the NRC Lexicon (EmoLex) AUTHOR (translated into over 100 languages, it is the most widely used lexicon in emotion detection), and the datasets Feel It AUTHOR and Multiemotion It AUTHOR. More recently, the field of computational linguistics and NLP has recognized the need for resources specifically created for languages other than English. Critics argue against relying solely on translations, advocating for lexicons created from texts in the target language and manually annotated AUTHOR. This approach has led to the development of lexicons like the Portuguese emotional lexicon AUTHOR, which embodies the principle of "each language for itself." For the Italian language, several language resources with emotional annotations have been produced over the years. The initial ItEM lexicon AUTHOR began by collecting seed words through an association task linking words to labels (Plutchik's basic emotions), then employed cosine similarity to expand the lexicon, assuming that neighboring words in semantic space share similar emotional connotations. The results, validated through crowdsourcing, showed low reliability for the emotions trust, anticipation (translated as ‘attese') and surprise.
The more recent Depeche Mood ++ AUTHOR was automatically created from judgements given by readers of articles on the ‘Corriere della Sera' newspaper website and uses a unique scale of emotions not directly comparable to others, such as ANNOYED, AFRAID, SAD, AMUSED, and HAPPY AUTHOR. In the case of Affective Norms AUTHOR, the tendency to create resources by adapting the English model with annotations in L1 languages other than English has resulted in Affective Norms for several languages, including Spanish AUTHOR and Dutch AUTHOR. For Italian, there has been a specific adaptation of the ANEW collected by AUTHOR. Despite the existing resources in the literature, a notable gap persists. There is a lack of manually annotated Italian language resources that combine both discrete emotion annotations and dimensional evaluations. Furthermore, no available resource provides a non-aggregated version of the data. This paper presents ELIta (Emotion Lexicon for Italian), an innovative resource designed for the analysis of emotions in the Italian language and emotion detection from text. ELIta aims to bridge this gap by providing a lexicon annotated using both categorical and dimensional approaches, and by offering a non-aggregated version of the data. This aligns with the perspectivist viewpoint, which treats disagreement as valuable information AUTHOR. The development process of ELIta, including lexicon selection, annotation methodologies, and a comparative analysis with existing Italian sentiment lexicons, is thoroughly described. Finally, analyses of the relationships between emotions, and between dimensions and gender, are presented.
This research introduces a new lexicon for Italian that collects word-emotion associations. Notably, it is the first lexicon, to the authors' knowledge, to be annotated using both categorical and dimensional approaches. Furthermore, it offers an innovative non-aggregated version of the data, reflecting a ‘perspectivist' approach that treats disagreement as valuable information; the data reveal, for example, that women show a greater tendency towards negative valence and higher arousal ratings than men. Analyses using correlations between basic emotions and dimensions, along with comparisons to existing resources such as ANEW, underscore the lexicon's potential to deepen our understanding of the interplay between emotions and language. While ELIta represents a significant step forward in capturing the complexity of emotion-language interactions in Italian, continued development will be essential to addressing its current limitations and maximizing its utility as a comprehensive tool for emotional analysis.
6
Sentiment, Emotion, Irony, Hate
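A brief pandas sketch of the kind of analysis the non-aggregated lexicon in the record above enables, such as emotion-dimension correlations and gender comparisons; the file name and column names (word, annotator_gender, valence, arousal, dominance, joy) are hypothetical, not ELIta's released format.

# Sketch: exploring a non-aggregated emotion lexicon with pandas.
import pandas as pd

df = pd.read_csv("elita_non_aggregated.csv")  # hypothetical file name

# Correlations between dimensional ratings and a basic-emotion association.
print(df[["valence", "arousal", "dominance", "joy"]].corr())

# Gender comparison: mean valence and arousal per annotator gender.
print(df.groupby("annotator_gender")[["valence", "arousal"]].mean())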
646_2024
2,024
Paolo Gajo, Alberto Barrón-Cedeño
On Cross-Language Entity Label Projection and Recognition
ENG
2
0
0
Università di Bologna
1
0
0
0
0
0
0
Italy
Forlì
Most work on named entity recognition (NER) focuses solely on English. Through the use of training data augmentation via machine translation (MT), multilingual NER can become a powerful tool for information extraction in multilingual contexts. In this paper, we augment NER data from culinary recipe ingredient lists by means of MT and word alignment (WA), following two approaches: (i) translating each entity separately, while taking into account the full context of the list, and (ii) translating the whole list of ingredients and then aligning entities using three types of WA models: Giza++, Fast Align, and BERT, the latter fine-tuned using a novel entity-shuffling approach. We depart from English data and produce Italian versions via MT, span-annotated with the entities projected from English. Then, we use the synthetic data produced by the two approaches to train mono- and multilingual NER BERT models. We test the performance of the WA and NER models on an annotated dataset of Italian ingredient lists, partially out-of-domain compared to the training data. The results show that shuffling entities leads to better BERT aligner models. The higher-quality NER data created by these models in turn enables NER models to achieve better results, with multilingual models reaching performances equal to or greater than their monolingual counterparts.
Named entity recognition (NER) is a sequence labeling task with a long history of works mainly focusing on the recognition of entities such as people, locations, and organizations. Multilingual NER has also attracted research efforts, with the two most recent SemEval campaigns including tasks on multilingual complex NER (MultiCoNER) AUTHOR. Despite the popularity of the task and the availability of various mono- and multilingual NER datasets, specific domains such as the culinary one likely require new annotated data. In addition, NER is often the first step in information extraction for knowledge graph construction and, to the best of our knowledge, all literature on this topic in the domain of cuisine solely focuses on English data AUTHOR. Therefore we argue that, given cuisine's multicultural nature, more research in this direction is warranted. Entity label projection AUTHOR aims to address this scarcity by automating the data generation process for NER. This task consists in taking the labels associated with spans from a source text and automatically applying them to its translation in another language, i.e. the target text. Through this task, we attempt to find an efficient automatic way of developing models for entity projection across languages to produce high-quality multilingual data for recipe Named Entities (r-NE) AUTHOR. Departing from an English-language dataset containing ingredients from culinary recipes, annotated at the span level with entity category labels, we first rely on an MT engine to translate each source entity s_i individually into Italian, while keeping the full context into account. This results in a first entity-wise (EW) translated EN-IT-ES dataset where entities are linked across languages. Using these synthetic alignments, we train BERT models to align source and target entities, shuffling the latter in order to prevent the model from learning to simply predict the original entity order. We then test the models on two novel entity alignment datasets, partially out-of-domain compared to the training data, e.g., as regards the used food products, units of measure, and cooking processes. As baselines to evaluate the BERT alignment models, we use Giza++ AUTHOR and Fast Align AUTHOR, two statistical word alignment (WA) models. In order to produce higher-quality r-NE data, we then machine translate the ingredient lists across their whole length, predicting target entity spans with the best BERT models obtained from the previous step, along with the baseline models. We thus obtain various sentence-wise (SW) translated datasets in Italian, trading some alignment accuracy for better translations. Both types of training data, EW and SW, are then used to fine-tune mono- and multilingual BERT NER models on the task of recognizing entities in food recipes. The NER models are trained on various combinations of mono- and multilingual data and are tested on the entity annotations from the two aforementioned novel testing datasets. Our contribution is three-fold: (i) we show the efficacy of fine-tuning alignment models by shuffling entities in contexts where most of the information depends on the presence of lexical items rather than the dependencies linking them; (ii) we showcase the performance delta between mono- and multilingual NER models when fine-tuning on the synthetic data produced by our alignment approach.
These %multilingual models can be used to %automatically label large datasets in multiple languages at a finer granularity level compared to currently available monolingual resources. % \item \Niii We release %all code and data %hereby presented, to produce %more training data in multiple languages.% The rest of the paper is structured as follows. Section~sec:related-word presents relevant past research on the subjects of cross-lingual entity alignment and recognition. Section~sec:data introduces the datasets and corpora used in the experiments, along with their annotation process. Section~sec:models presents architecture, training, and evaluation details for the models comprising our pipeline. Section~sec:experiments-results discusses the conducted experiments and their results. Finally, Section~sec:conclusions summarizes the paper and draws conclusions. Appendix~app:es shows further results including Spanish. Appendix~app:xlwa presents statistics and gives insight on the additional training data used. Appendix~app:comp-res lists information on the computational requirements.
We explored a simple novel technique to automatically generate high-quality multilingual NER data by combining machine translation and cross-language entity linking. For our experiments, we relied on the English-language TASTEset dataset, whose recipes contain lists of ingredients span-annotated for entity recognition. Moreover, we manually curated a novel English-Italian cross-language dataset, featuring the same kind of entity annotation, with the addition of cross-language entity alignments. We machine translated the entities contained in TASTEset's recipes individually and shuffled them within ingredient boundaries. Leveraging this augmented data, we then fine-tuned BERT entity-alignment models. Using statistical word-alignment models as baselines, we tested these BERT models on our English-Italian parallel corpus. The results showed that models fine-tuned using our novel approach consistently outperform those trained on unshuffled data, along with the two statistical baselines. We then created additional synthetic data by first translating TASTEset's recipes in their entirety, and then aligning the entities in the machine-translated target text using the best models obtained from the first part of the study. These data allowed us to obtain better NER models, compared to the ones we would have obtained by using the original recipes translated entity by entity. We tested monolingual English and Italian BERT models against mBERT, and showed that the latter is capable of obtaining the same performance as its monolingual counterparts when tested on monolingual NER data. In future work, we plan to extend the annotation of our datasets, both in terms of number of instances and annotators. We will also prioritize solving the token-to-character conversion issues encountered in this study. Furthermore, we plan to leverage this data augmentation technique in order to improve multilingual text-to-graph models, since all of the literature in this regard focuses on English-only data AUTHOR.
10
Machine Translation
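The core idea of the entity label projection described in the record above can be illustrated with a small, self-contained sketch: given token-level entity labels on the source side and source-to-target alignment links (as produced by an aligner such as Fast Align or a fine-tuned BERT model), the labels are carried over to the aligned target tokens. The toy sentence and alignment below are illustrative only.

# Sketch: projecting span labels from source to target via word-alignment links.
def project_labels(src_labels, alignment, tgt_len):
    # src_labels: one label per source token (None = outside any entity).
    # alignment: set of (src_idx, tgt_idx) links from a word aligner.
    tgt_labels = [None] * tgt_len
    for s, t in alignment:
        if src_labels[s] is not None:
            tgt_labels[t] = src_labels[s]
    return tgt_labels

src = ["two", "cups", "of", "flour"]
src_labels = ["QUANTITY", "UNIT", None, "FOOD"]
tgt = ["due", "tazze", "di", "farina"]
links = {(0, 0), (1, 1), (2, 2), (3, 3)}
print(project_labels(src_labels, links, len(tgt)))
# ['QUANTITY', 'UNIT', None, 'FOOD']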
647_2024
2,024
Irene Fioravanti, Luciana Forti, Stefania Spina
Automatic Error Detection: Comparing AI vs. Human Performance on L2 Italian Texts
ENG
3
3
1
Università per Stranieri di Perugia
1
0
0
0
0
0
0
Italy
Perugia
This paper reports on a study aimed at comparing AI vs. human performance in detecting and categorising errors in L2 Italian texts. Four LLMs were considered: ChatGPT, Copilot, Gemini and Llama3. Two groups of human annotators were involved: L1 and L2 speakers of Italian. A gold standard set of annotations was developed. A fine-grained annotation scheme was adopted to reflect the specific traits of Italian morphosyntax, with related potential learner errors. Overall, we found that human annotation outperforms AI, with some degree of variation with respect to specific error types. We interpret this as a possible effect of the over-reliance on English as the main language used in NLP tasks. We thus support a more widespread consideration of different languages.
Identifying errors in texts written by second language (L2) learners is a relevant task in several research areas, which can also have practical applications in a variety of fields. Error analysis is a traditional approach adopted in second language acquisition research for decades (Corder 1967), which learner corpus research has more recently revisited in light of the availability of learner corpora and corpus-based methods of analysis (Dagneaux et al. 1998). In addition, acquisitional research on learners' errors has relevant pedagogical implications involving error-related feedback: appropriate corrective feedback can lead to improved writing skills in both L1 and L2 writing (Biber et al. 2011). Furthermore, automatic error detection and categorisation is key in language testing and assessment research and practice, with reference to automated essay scoring (e.g., Song 2024), which has important implications for rubric descriptors. The interest of Natural Language Processing (NLP) in grammatical error correction (GEC) and grammatical error detection (GED) relies on the creation of systems used in Intelligent Computer-Assisted Language Learning (ICALL), Automated Essay Scoring (AES) or Automatic Writing Evaluation (AWE) contexts. ICALL systems integrate NLP techniques into CALL platforms, providing learners with flexible and dynamic interactions in their learning process. AES systems automatically grade written texts with machine learning techniques, as do AWE systems, which additionally provide learners with feedback. Identifying and annotating errors in the performance of L2 learners, while beneficial for both pedagogical and research purposes, presents considerable challenges. This process is typically conducted manually in the case of learner corpora due to the inherent nature of errors as latent phenomena. The manual identification of learners' errors requires a substantial degree of subjective judgment by human annotators (Dobrić 2023), as well as a considerable investment in terms of time. The present study aims to contribute to the evaluation of the performance of Large Language Models (LLMs) in the task of automatic grammatical error detection (GED) in written texts produced by L2 learners. In particular: 1. it evaluates the behaviour of different LLMs in relation to an error detection task in written texts produced by L2 learners of Italian, a language other than English, in line with recent studies criticising the over-reliance on English in NLP research (Søgaard 2022) and seeking to contribute to the very few studies that do consider languages other than English (e.g., MultiGED-2023; Volodina et al. 2023); 2. it targets specific error types and grammatical categories in order to mitigate the problems arising from the broadness of the notion of error, focusing on clear-cut and possibly unambiguous error categories; 3. it relies on a high degree of accuracy in error annotation, which was manually performed by three researchers on a small learner dataset serving as the test set on which the systems are evaluated; 4. it assesses the performance of LLMs in error detection and categorisation, through a comparison with the performance of native Italian students and advanced learners of L2 Italian on the same task.
The main aim of our paper was to investigate whether AI can be a valid support for second language acquisition research, in learner error detection, with specific reference to a language other than English, i.e., Italian. Our study compared the performance of four LLMs with one another, and also with that of L1 and L2 annotators. A gold standard (GS), produced by the annotations of three trained linguists, was adopted as benchmark. Given the richness of Italian morphosyntax and the variety of possible morphosyntactic errors that L2 Italian learners may produce, and contrary to the few other studies on Italian, this study considered three different error types for two of the parts of speech listed in Table 1, i.e. article and preposition. This methodological novelty can potentially lead to much more fine-grained results, while counterbalancing, as in our case, the low number of annotated texts. The general finding that human annotators perform better than LLMs, both in terms of overall error detection and in terms of error type detection, is particularly significant if we consider the structural differences between English and other languages. Italian, like many other languages, is characterised by rich morphosyntactic traits, which inevitably have a considerable impact on the computational processing of L1 and L2 texts. Our findings may thus be a reflection of the well-known language bias in NLP, linked to the dominance of English, which leads to a number of scientific but also social inequalities (Søgaard 2022; Volodina et al. 2023). Repeating the study with pre-trained LLMs might improve their performance. At present, pivotal tasks such as automatic error detection and classification, performed on a morphologically rich language such as Italian, do not seem to be viable with LLMs, as they do not add effectiveness to the same task performed manually. Future developments of this study may also include fine-tuned models, which are generally indicated as potentially better-performing than non-tuned ones (Kruijsbergen et al. 2024), as well as an increased number of annotated texts and an even more fine-grained and extended error annotation scheme. Automatic error detection and classification can be crucial for both the development of online language assessment systems and for second language acquisition research as a whole. This is especially true for languages other than English, which continue to be severely under-represented in all domains of language sciences, including NLP.
8
Learner Corpora and Language Acquisition
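Evaluating error detection against a gold standard, as in the record above and whether the detector is an LLM or a human annotator, reduces to set comparison over annotated errors. The sketch below computes precision, recall and F1 under an assumed (text_id, start, end, error_type) representation, which is not necessarily the paper's exact scheme.

# Sketch: scoring error detection (LLM or human) against a gold standard.
def prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # exact matches on span and error type
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(1, 10, 12, "article"), (1, 30, 33, "preposition")}
llm = {(1, 10, 12, "article"), (1, 50, 52, "article")}
print(prf(llm, gold))  # (0.5, 0.5, 0.5)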
648_2024
2,024
Elio Musacchio, Lucia Siciliani, Pierpaolo Basile, Edoardo Michielon, Marco Pasqualini, Asia Beatrice Uboldi, Giovanni Semeraro
A study on the soundness of closed-ended evaluation of Large Language Models adapted to the Italian language
ENG
7
2
1
Università di Bari Aldo Moro, Università di Pisa, Fastweb SpA
3
0
0
0
0
3
Edoardo Michielon, Marco Pasqualini, Asia Beatrice Uboldi
Italy
Bari, Milan, Pisa
With the rising interest in Large Language Models, deep architectures capable of solving a wide range of Natural Language Generation tasks, an increasing number of open-weight architectures have been developed and released online. In contrast with older architectures, which were aimed at solving specific linguistic assignments, Large Language Models have shown outstanding capabilities in solving several tasks at once, raising the question of whether they can truly comprehend natural language. Nevertheless, evaluating this kind of capability is far from easy. One of the proposed solutions so far is using benchmarks that combine various types of tasks. This approach is based on the premise that achieving good performance on each of these individual tasks can imply having developed a model capable of understanding language. However, while this assumption is not incorrect, it is evidently not sufficient, and the evaluation of Large Language Models still remains an open challenge. In this paper, we conduct a study aimed at highlighting the potential and limitations of current datasets and how a new evaluation setting applied to language-adapted Large Language Models may provide more insight than traditional approaches.
Large Language Models (LLMs) are models based on the Transformer architecture capable of solving a wide variety of Natural Language Generation (NLG) tasks, even those not encountered during training, due to their extensive training and large number of parameters. Thanks to their remarkable skills, interest in LLMs is now at its peak, resulting in a proliferation of open-weight models (e.g., LLaMA, Mistral, and many others). Among the several challenges related to the development of LLMs, one of the most critical is their evaluation AUTHOR. One approach to tackle this issue has been to build benchmarks that collect different datasets, with the aim of obtaining a more comprehensive evaluation of the model's overall capabilities. Currently, there is a leaderboard AUTHOR which keeps track of the capabilities of openly available LLMs. Specifically, the models are tested on six tasks that span different abilities a language model should have, e.g. reasoning or text completion. Regarding their reasoning abilities, the models are tested by solving closed-ended tasks. Specifically, multiple-choice question answering tasks are provided, where a question is given with a list of possible alternatives associated with an identifier (a letter, a number, and so on). Intuitively, since the model has also been pre-trained on closed-ended question-answering data, it should be able to generalize and identify the correct choice out of the available ones. Furthermore, rather than generating the output directly, the probabilities learned by the model are studied, using log-likelihood to assess which option is more likely to be correct. For the English language, this evaluation methodology has been a standard approach to assess the capabilities of LLMs. However, when adapting a model to a new language, due to the low amount of non-English data used to pre-train such models, this methodology may not be as sound. The model only has to generate the correct option identifier, so this does not really test the model's ability to generate high-quality text in another language. The goal of this work is to understand whether a new evaluation setting applied to language-adapted LLMs may give more insight than the traditional approach. Therefore, our contributions are the following: (i) we test two evaluation settings for language-adapted LLMs, changing the structure of closed-ended question answering tasks; (ii) we evaluate the performance of state-of-the-art models in these settings; (iii) we study the sensitivity of the models to the input prompt.
We have carried out a study on the effectiveness of the evaluation of Italian-adapted LLMs on closed-ended tasks, specifically multiple-choice question answering. We have experimented with two settings: an open-ended one and a closed-ended one without option identifiers. The results show better performance for the latter. Furthermore, they also show that, with respect to the Open Italian LLM Leaderboard, there are significant differences in model performance. We can conclude that the evaluation of Italian-adapted models should follow a more rigorous procedure which does not rely mainly on closed-ended tasks. We release the code that was used on GitHub. In the future, we plan to further work on the topic and attempt to define best practices for the evaluation of these models. We acknowledge the support of the PNRR project FAIR - Future AI Research (PE00000013), Spoke 6 - Symbiotic AI (CUP H97G22000210007) under the NRRP MUR program funded by the NextGenerationEU.
1
Language Models
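The log-likelihood scoring that the record above calls into question can be sketched as follows: each option is scored by the summed log-probability of its tokens given the question, and the highest-scoring option wins. GPT-2 stands in here for an Italian-adapted LLM, and the boundary between question and option tokens is approximated, since tokenizing the concatenation may split differently from tokenizing the parts.

# Sketch: standard closed-ended evaluation via per-option log-likelihood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def option_logprob(question, option):
    # Sum of log-probabilities of the option tokens given the question.
    q_len = tok(question, return_tensors="pt").input_ids.shape[1]
    ids = tok(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
    return sum(logprobs[pos, ids[0, pos + 1]].item()
               for pos in range(q_len - 1, ids.shape[1] - 1))

question = "Qual e' la capitale d'Italia? Risposta:"
options = ["Roma", "Milano", "Napoli"]
print(max(options, key=lambda o: option_logprob(question, o)))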
649_2024
2,024
Alice Fedotova, Adriano Ferraresi, Maja Miličević Petrović, Alberto Barrón-Cedeño
Constructing a Multimodal, Multilingual Translation and Interpreting Corpus: A Modular Pipeline and an Evaluation of ASR for Verbatim Transcription
ENG
4
2
1
Università di Bologna
1
0
0
0
0
0
0
Italy
Forlì
This paper presents a novel pipeline for constructing multimodal and multilingual parallel corpora, with a focus on evaluating state-of-the-art automatic speech recognition tools for verbatim transcription. The pipeline was developed during the process of updating the European Parliament Translation and Interpreting Corpus (EPTIC), leveraging recent NLP advancements to automate challenging tasks like multilingual alignment and speech recognition. Our findings indicate that current technologies can streamline corpus construction, with fine-tuning showing promising results in terms of transcription quality compared to out-of-the-box Whisper models. The lowest overall WER achieved for English was 0.180, using a fine-tuned Whisper-small model. As for Italian, the lowest WER (0.152) was obtained by the Whisper-large-v2 model, with the fine-tuned Whisper-small model still outperforming the baseline (0.201 vs. 0.219).
The present paper introduces a pipeline for the construction of multimodal and multilingual parallel corpora that can be used for translation and interpreting studies (TIS), among others. The construction of such resources has been acknowledged as a "formidable task" AUTHOR which, if automated as we propose, involves a number of subtasks such as automatic speech recognition (ASR), multilingual sentence alignment, and forced alignment, each of which poses its own challenges. Yet tackling these subtasks also offers a unique way to evaluate state-of-the-art natural language processing (NLP) tools against a unique, multilingual benchmark. In this paper we discuss the development of a modular pipeline adaptable for each of these subtasks and address the issue of whether performing ASR with OpenAI's Whisper AUTHOR could be suitable for verbatim transcription. We showcase the utility of this pipeline by expanding the European Parliament Translation and Interpreting Corpus (EPTIC), a multimodal parallel corpus comprising speeches delivered at the European Parliament along with their official interpretations and translations AUTHOR. The transcription conventions adopted for the compilation of EPTIC were developed ad hoc and aim at reproducing minimal prosodic features, but can still be considered an instance of verbatim transcription AUTHOR; the issue of what truly constitutes verbatimness is still an object of debate and will be further discussed. There is fairly widespread agreement on the statement that every transcription system reflects a certain methodological approach AUTHOR, and that by "choosing not to transcribe a particular dimension, the researcher has implicitly decided that the dimension plays no role in the phenomenon in question" AUTHOR. To investigate the characteristics of Whisper's AUTHOR transcriptions in English and Italian, we formulate the following two research questions: RQ1: Is it possible to use fine-tuning to adapt the transcription style to that of an expert annotator? RQ2: What is the impact of speech type (native, non-native, interpreted) on transcription quality? We find that satisfactory results can be achieved with automatic speech recognition, although challenges remain, especially with regard to the verbatimness of the transcription, a crucial factor in corpora intended for TIS. Fine-tuning Whisper-small on English data obtains a lower word error rate (WER) of 0.180 compared to Whisper-large-v2 (0.194), potentially indicating that fine-tuning Whisper models holds promise for improving their performance in terms of adhering to a certain transcription style. However, this was not the case in the experiments on Italian. In the Italian scenario, Whisper-large-v2 obtained a WER of 0.152 compared to a WER of 0.201 obtained by the fine-tuned Whisper-small model. It should be noted, however, that this constituted an improvement over the baseline Whisper-small model, which obtained a higher WER of 0.219. A significant limitation in the case of fine-tuning in Italian was the smaller amount of data available for tuning compared to English. Lastly, we find that sentence alignment can be facilitated through state-of-the-art embedding-based tools, whereas forced alignment can be considered a largely solved problem. This makes the construction of corpora such as EPTIC more streamlined and less dependent on human intervention, with wider implications for multilingual corpus construction in the field of TIS and beyond.
This paper presented a novel pipeline for constructing multimodal and multilingual parallel corpora, with a focus on evaluating state-of-the-art automatic speech recognition tools for verbatim transcription. Experiments with Whisper models on EPTIC revealed robust performance across languages and speech types, particularly for English and Italian. However, some limitations remain regarding ASR performance and achieving verbatim transcriptions. Fine-tuning Whisper showed promising reductions in WER, particularly for English, indicating the potential of adapting the model to use a more verbatim style. Yet qualitative analysis revealed inconsistencies in handling disfluencies, truncations, and discourse markers. Furthermore, higher WERs for non-native and interpreted speech underscore remaining challenges. Future research efforts could explore incorporating additional metrics beyond WER to better capture the degree of verbatimness in the transcriptions, and expanding the Italian dataset to potentially improve the performance of the fine-tuned model. Another avenue for research could include augmenting the dataset with external data containing pairs of audio and verbatim transcripts, most notably the Switchboard corpus introduced in AUTHOR. Other methods besides fine-tuning could be explored to enhance the quality of transcriptions, for instance by leveraging the official verbatim reports on the European Parliament's website. Lastly, a model could be developed for detecting the metadata item relative to the speech type, i.e., impromptu, read out, or mixed, based on textual or multimodal features. The work of A. Fedotova is supported by the NextGeneration EU programme, ALMArie CURIE 2021 - Linea SUpER, Ref. CUPJ45F21001470005.
10
Machine Translation
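A minimal sketch of the transcription-plus-scoring step described in the record above: an out-of-the-box Whisper checkpoint transcribes a segment and jiwer computes the WER against a gold verbatim transcript. The model size, audio path, and reference string are placeholders, and the paper's text normalization is not reproduced.

# Sketch: Whisper transcription scored with WER against a verbatim reference.
import jiwer
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
hypothesis = asr("speech_segment.wav")["text"]  # placeholder audio path

# Gold verbatim transcript, keeping disfluencies, in the spirit of EPTIC's conventions.
reference = "well i i think that the proposal is uh acceptable"
print(jiwer.wer(reference.lower(), hypothesis.lower()))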
650_2024
2,024
Achille Fusco, Matilde Barbini, Maria Letizia Piccini Bianchessi, Veronica Bressan, Sofia Neri, Sarah Rossi, Tommaso Sgrizzi, Cristiano Chesi
Recurrent Networks are (Linguistically) Better? An Experiment on Small-LM Training on Child-Directed Speech in Italian
ENG
8
5
0
IUSS
1
0
0
0
0
0
0
Italy
Pavia
Here we discuss strategies and results of a small-sized training program based on Italian child-directed speech (less than 3M tokens) for various network architectures. The rationale behind these experiments [1] lies in the attempt to understand the effect of this naturalistic training diet on different models' architecture. Preliminary findings lead us to conclude that: (i) different tokenization strategies produce mildly significant improvements overall, although segmentation aligns more closely with linguistic intuitions in some cases, but not in others; (ii) modified LSTM networks (eMG-RNN variant) with a single layer and a structurally more controlled cell state perform slightly worse in training loss (compared to standard one- and two-layered LSTM models) but better on linguistically critical contrasts. This suggests that standard loss/accuracy metrics in autoregressive training procedures are linguistically irrelevant and, more generally, misleading since the best-trained models produce poorer linguistic predictions ([2], pace [3]). Overall, the performance of these models remains significantly lower compared to that of 7-year-old native-speaker children in the relevant linguistic contrasts we considered [4].
According to the mainstream LLM development pipeline, Transformer-based architectures [5] outperform sequential training models, like LSTM [6], in various NLP tasks. When only small-sized training data are available, optimization becomes necessary [7], [8], but common optimization techniques neglect the linguistically relevant facts that these models (i) conflate semantic/world knowledge with morpho-syntactic competence, (ii) require unreasonable amounts of training data compared to that needed by children during language acquisition, and (iii) yield lower returns in cognitive/linguistic terms the higher their performance [9]. In this paper we address these three issues, starting from the observation that while world knowledge uses all training data available, and the more the better, structural (morpho-syntactic and compositional semantic) knowledge might require a much smaller dataset (from 10 to 100 million words, according to [10]). We explore this intuition further and, based on prolific literature from the '80s showing that typical child errors are structurally sensitive and never random [11], we shape the networks' architecture to bias learning towards plausible structural configurations, possibly preventing these "small" language models (SLM) from producing wrong linguistic generalizations. We started from a mild revision of the LM training and evaluation pipeline for Italian, including alternative approaches to tokenization based on pseudo-morphological decomposition (§2.2); we then approached a more structurally-driven update of the cell state in LSTM networks, which we call eMG-RNN variants (§2.3); we finally adopted a precise testing benchmark for specific linguistic contrasts in Italian following the BLiMP design [12] (§2.4). We will first set the stage (§2) and discuss an alternative tokenization strategy (MorPiece). A simple modification to the gating system in LSTM is proposed that mimics certain linguistic constraints. Then, we will describe the relevant experiments we have run (§3) and draw some conclusions based on the observed results (§4). A general discussion with a description of the next steps will conclude this paper (§5).
Overall, LSTM networks significantly outperform Bidirectional Transformers in this minimal-pairs test on Italian. This finding is consistent with results previously discussed in the literature, suggests a clear advantage of recurrent, sequential model architectures (e.g., LSTM) over Bidirectional Transformers in terms of linguistic generalizations [38], and partially justifies the renewed interest in RNNs observed in the last couple of years [24], [26]. As far as the tokenization procedure is concerned, it is somewhat premature to draw definitive conclusions from our experiments, as MorPiece has not yet been fully optimized or tested. Specifically, the optimal cut-off threshold and minimum branching factor have not been systematically evaluated. Nevertheless, a more morphologically coherent segmentation is expected to enhance sensitivity in certain minimal contrasts. Similarly, the eMG-RNN architecture could be further explored and optimized, particularly with respect to specific contrasts, which may help determine whether our linguistic modeling is on the right track. Evidence to the contrary is attested by the judgments of sentences with missing thematic roles, which are often incorrectly preferred by most models, including our eMG-RNN. In the end, our results suggest that the loss/accuracy performance registered in training is not a significant predictor of performance on the COnVERSA test, or more generally, of the linguistic coherence of the trained LM. Likewise, the models' dimension is not a clear predictor either: Transformers trained on the same small dataset perform randomly (at all sizes their performance is around 50%), while eMG-RNN, which has a number of parameters similar to LSTM-2, outperforms both LSTM-2 and LSTM-1 (half the size of eMG-RNN). The training size remains strikingly different from the input received by children: this difference of one order of magnitude suggests that the biases considered in eMG-RNN are not yet satisfactory and that our Language Acquisition Device is still more efficient; in this sense, the Poverty of Stimulus Hypothesis remains unrefuted [39] by these results. Next steps will consider extending the training corpus to 10M tokens (to match the English counterpart [1]) and further exploring the effects of optimized tokenization procedures and other minimal modifications and optimizations [24] of recurrent neural networks.
1
Language Models
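A compact sketch of the evaluation logic behind the BLiMP-style minimal pairs used in the record above, with a small LSTM language model: the model is asked to assign a lower summed surprisal to the grammatical member of the pair. The vocabulary, layer sizes, and (untrained) weights are placeholders for the paper's actual setup.

# Sketch: a small LSTM LM and a minimal-pair surprisal comparison (PyTorch).
import torch
import torch.nn as nn

class LstmLM(nn.Module):
    def __init__(self, vocab_size, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)
    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)

vocab = {"<s>": 0, "il": 1, "bambino": 2, "dorme": 3, "dormono": 4}
model = LstmLM(len(vocab))  # untrained here; would be trained on child-directed speech

def surprisal(tokens):
    # Negative summed log-probability of each token given its left context.
    ids = torch.tensor([[vocab[t] for t in tokens]])
    logp = torch.log_softmax(model(ids)[0, :-1], dim=-1)
    return -sum(logp[i, ids[0, i + 1]].item() for i in range(ids.shape[1] - 1))

good = ["<s>", "il", "bambino", "dorme"]
bad = ["<s>", "il", "bambino", "dormono"]  # agreement violation
print(surprisal(good) < surprisal(bad))  # expected True after training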
651_2024
2,024
Ibai Guillén-Pacho, Arianna Longo, Marco Antonio Stranisci, Viviana Patti, Carlos Badenes-Olmedo
The Vulnerable Identities Recognition Corpus (VIRC) for Hate Speech Analysis
ENG
5
2
0
Universidad Politécnica de Madrid, Università di Torino, Aequa-tech
3
1
0
2
Ibai Guillén-Pacho, Carlos Badenes-Olmedo
2
Arianna Longo, Marco Antonio Stranisci
Italy, Spain
Madrid, Turin
This paper presents the Vulnerable Identities Recognition Corpus (VIRC), a novel resource designed to enhance hate speech analysis in Italian and Spanish news headlines. VIRC comprises 880 headlines, manually annotated for vulnerable identities, dangerous discourse, derogatory expressions, and entities. Our experiments reveal that recent large language models (LLMs) struggle with the fine-grained identification of these elements, underscoring the complexity of detecting hate speech. VIRC stands out as the first resource of its kind in these languages, offering a richer annotation scheme compared to existing corpora. The insights derived from VIRC can inform the development of sophisticated detection tools and the creation of policies and regulations to combat hate speech on social media, promoting a safer online environment. Future work will focus on expanding the corpus and refining annotation guidelines to further enhance its comprehensiveness and reliability.
Hate Speech (HS) detection is a task with a high social impact. Developing technologies that are able to recognize these forms of discrimination is not only crucial to enforcing existing laws but also supports important tasks like the moderation of social media content. However, recognizing HS is challenging. Verbal discrimination takes different forms and involves a number of correlated phenomena that make it difficult to reduce HS to a binary classification. Analyzing the recent history of corpora annotated for HS, it is possible to observe a shift from very broad categorizations of hateful content to increasingly detailed annotation schemes aimed at understanding the complexity of this phenomenon. High-level schemes including dimensions like "hateful/offensiveness" or "sexism/racism" paved the way for more sophisticated attempts to formalize such concepts in different directions: exploring the interaction between HS and vulnerable targets; studying the impact of subjectivity; identifying the triggers of HS in texts. Despite this trend, the complex semantics of HS in texts is far from being fully explored. Information Extraction (IE) approaches to HS annotation have rarely been implemented so far. Therefore, corpora that include fine-grained structured semantic representations of HS incidents are not available. The only notable exception is recent work that treats the identification of HS targets as a span-based task. In order to fill this gap, we present the Vulnerable Identities Recognition Corpus (VIRC): a dataset of 880 Italian and Spanish headlines against migrants aimed at providing an event-centric representation of HS against vulnerable groups. The annotation scheme is built on four elements: named entities (all the named entities involved in an HS expression: "location," "organization," and "person"); vulnerable identity mentions (generic mentions related to identities targeted by HS as defined by international regulatory frameworks: "women," "LGBTQI," "ethnic minority," and "migrant"); derogatory mentions (all mentions that negatively portray people belonging to vulnerable groups); and dangerous speech (the part of the message that is perceived as hateful against named entities or vulnerable identities). In this paper, we present a preliminary annotation experiment intended to validate the scheme and to assess the impact of disagreement in such a fine-grained task. The paper is structured as follows: we first discuss related work, then describe our methodology, introduce the VIRC corpus, and finally present conclusions and possible future work.
The Vulnerable Identities Recognition Corpus (VIRC), created in this work, reveals the challenge of identifying vulnerable identities due to the rapid evolution of language on social media. Our experiments indicate that large language models (LLMs) struggle significantly with this task. VIRC provides a detailed and structured resource that enhances understanding of the extensive use of hate speech in Italian and Spanish news headlines. The corpus is particularly valuable as it includes more annotation dimensions compared to related studies in other languages, such as vulnerable identities, dangerous discourse, derogatory expressions, and entities. This differentiation between vulnerable identities and entities, as well as between dangerous and derogatory elements, enables the development of sophisticated detection tools that can facilitate large-scale actions to mitigate the impact of hate speech (e.g., moderation of messages and generation of counter-narratives that reduce the damage to the mental health of victims). Future work will focus on expanding this resource by doubling the size of annotations for both languages and including non-racism-related phrases to ensure the resource is comprehensive. Additionally, we plan to refine the annotation guidelines to avoid low agreement on the derogatory label, enhancing the overall reliability and utility of the corpus. These efforts will further improve the effectiveness of hate speech detection and contribute to the development of policies and tools for a safer online environment.
6
Sentiment, Emotion, Irony, Hate
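One way to picture the event-centric, four-layer annotation scheme described in the record above is as a simple data structure; the field names below are illustrative and do not reflect the released VIRC format.

# Sketch: the four VIRC annotation layers as plain dataclasses.
from dataclasses import dataclass, field

@dataclass
class Span:
    start: int
    end: int
    text: str

@dataclass
class HeadlineAnnotation:
    headline: str
    entities: list = field(default_factory=list)                # (Span, "person"/"location"/"organization")
    vulnerable_identities: list = field(default_factory=list)   # (Span, "migrant"/"women"/"LGBTQI"/"ethnic minority")
    derogatory_mentions: list = field(default_factory=list)     # Span instances
    dangerous_speech: list = field(default_factory=list)        # Span instances

h = HeadlineAnnotation(headline="...",
                       vulnerable_identities=[(Span(0, 8, "migranti"), "migrant")])
print(h)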
652_2024
2,024
Tommaso Bonomo, Simone Conia, Roberto Navigli
Exploring the Dissociated Nucleus Phenomenon in Semantic Role Labeling
ENG
3
0
0
Sapienza Università di Roma
1
0
0
0
0
0
0
Italy
Rome
Dependency-based Semantic Role Labeling (SRL) is bound to dependency parsing, as the arguments of a predicate are identified through the token that heads the dependency relation subtree of the argument span. However, dependency-based SRL corpora are susceptible to the dissociated nucleus problem: when a subclause's semantic and structural cores are two separate words, the dependency tree chooses the structural token as the head of the subtree, coercing the SRL annotation into making the same choice. This leads to undesirable consequences: when directly using the output of a dependency-based SRL method in downstream tasks it is useful to work with the token representing the semantic core of a subclause, not the structural core. In this paper, we carry out a linguistically-driven investigation on the dissociated nucleus problem in dependency-based SRL and propose a novel algorithm that aligns predicate-argument structures to the syntactic structures from Universal Dependencies to select the semantic core of an argument. Our analysis shows that dissociated nuclei appear more often than one might expect, and that our novel algorithm greatly increases the richness of the semantic information in dependency-based SRL. We release the software to reproduce our experiments at .
Within the field of Natural Language Processing, Semantic Role Labeling (SRL) AUTHOR is aimed at recognizing the semantic information conveyed by a sentence, more specifically identifying who did what to whom, when, where and how AUTHOR. Over the years, SRL has split into two main annotation formalisms, namely span-based and dependency-based. The key difference between the two lies in how they identify the roles of a predicate: span-based SRL directly extracts a span of the input text as the argument of a predicate, whilst dependency-based SRL identifies as the argument the word that heads the syntactic dependency relation subtree corresponding to the argument span. Using dependency-based SRL can be beneficial in real-world settings, as i) dependency-based SRL parsers have achieved better results on standard benchmarks, and ii) the identified token can be directly utilized in several downstream tasks, including Coreference Resolution AUTHOR, Opinion Role Labeling AUTHOR, Argument Mining AUTHOR, and Concept Map Mining AUTHOR, among others. However, the use of role tokens in the above tasks requires them to carry the "semantic meaning" of the role. This requirement is often not fulfilled when examining both the output of state-of-the-art dependency-based SRL systems and the corpora they were trained on, such as CoNLL-2009 AUTHOR. In these annotations, it is not uncommon to have an adposition serving as the head word of a semantic role, even though adpositions do not represent the semantic core of that role. In linguistics, this phenomenon is referred to as an instance of dissociated nucleus (Tesnière 2015, ch. 23). Although this term encompasses many different syntactic constructions, here we focus on adpositional clauses present in the CoNLL-2009 dataset, across all of its languages. In this paper, we carry out a concise, linguistically-driven investigation of dissociated nuclei in dependency-based SRL, uncovering the extent of this problem and how it affects the semantic aspect of this task. In addition, we introduce SemDepAlign, a simple yet effective algorithm capable of mitigating this phenomenon significantly by aligning predicate-argument structures in SRL with syntactic parses from the Universal Dependencies project, which addresses the dissociated nucleus phenomenon directly in the dependency structures. Applying SemDepAlign to CoNLL-2009 results in a substantial increase in the semantic variety of role tokens, measured through a set of proxy metrics. Finally, we provide a glimpse of how addressing dissociated nuclei simplifies the alignment between Semantic Role Labeling and Semantic Parsing, specifically with Abstract Meaning Representation (AMR) AUTHOR. We release SemDepAlign and Aligned-CoNLL09 -- the result of applying SemDepAlign to CoNLL-2009 -- in the hope that our work can encourage a deeper focus on semantics in SRL and foster future integration of this task into downstream applications.
In this paper, we conducted an in-depth investigation of the dissociated nucleus issue in dependency-based SRL. We introduced SemDepAlign, a novel method to align predicate-argument structures in SRL with syntactic parses from the Universal Dependencies project, which addresses the dissociated nucleus phenomenon. Our analyses and experiments in SRL modeling demonstrate that our approach to dissociated nuclei brings more semantic richness whilst remaining competitive on standard benchmarks.
4
Syntax and Dependency Treebanks
653_2024
2,024
Teresa Paccosi, Sara Tonelli
Benchmarking the Semantics of Taste: Towards the Automatic Extraction of Gustatory Language
ENG
2
2
1
Fondazione Bruno Kessler, Università di Trento, DHLab / KNAW Humanities Cluster
3
1
0
1
Teresa Paccosi
0
0
Italy, Netherlands
Trento, Amsterdam
In this paper, we present a benchmark containing texts manually annotated with gustatory semantic information. We employ a FrameNet-like approach previously tested on olfactory language, which we adapt to capture gustatory events. We then propose an exploration of the data in the benchmark to show the possible insights brought by this type of approach, addressing the investigation of emotional valence across text genres. Finally, we present a supervised system trained on the taste benchmark for the extraction of gustatory information from historical and contemporary texts.
Despite the central role of nutrition in our lives, taste has often been classified as an inferior sense in the Western philosophical tradition. This downplayed role is reflected in the vocabulary used to describe the gustatory experience, which, together with smell, is characterized by a scarcity of domain-specific terms AUTHOR. The difficulty of capturing the semantics of taste could help explain why there are few works in the fields of Natural Language Processing (NLP) and Digital Humanities (DH) that deal with this sense and, in particular, with the language used to describe its experience. While there has been renewed interest in the automatic extraction of nutrients and ingredients from texts for health and medicinal purposes AUTHOR, less attention has been devoted to the development of tools and models focused on capturing the semantics of sensory experiences, especially in a diachronic fashion. In this paper, we present an English benchmark for the study of gustatory language and a supervised system for the automatic extraction of taste-related events in English, which we trained using this benchmark. The benchmark was built as a counterpart to the olfactory one presented in AUTHOR, with the idea of making the study of the language of these two senses comparable. The system is designed as a means to study the language used to describe the experience of tasting from both synchronic and diachronic perspectives. The selected formal representation for the semantics of taste is based on Frame Semantics AUTHOR, and the system is trained to identify the lexical units and the possible semantic roles contributing to the construction of a gustatory event. We present the results of the experiments and an exploration of the benchmark data, aiming to demonstrate the potential of frame-based analysis for sensory studies.
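As a rough illustration of the kind of structured output such a frame-based system targets, here is a minimal sketch of a FrameNet-like record for a gustatory event. Only the frame-element names Taste_Carrier and Location are attested in the paper; the container format, the example sentence, and the character spans are illustrative assumptions.

```python
# Sketch of a FrameNet-like record for a gustatory event; only the
# frame-element names Taste_Carrier and Location come from the paper,
# the rest (sentence, spans, container) is illustrative.
from dataclasses import dataclass, field

@dataclass
class FrameElement:
    name: str        # e.g., "Taste_Carrier", "Location"
    span: tuple      # (start_char, end_char), end-exclusive
    text: str

@dataclass
class GustatoryFrame:
    lexical_unit: str    # the taste-evoking word
    lu_span: tuple
    elements: list = field(default_factory=list)

# Sentence: "The soup tasted bitter in the old tavern."
frame = GustatoryFrame(
    lexical_unit="tasted",
    lu_span=(9, 15),
    elements=[
        FrameElement("Taste_Carrier", (0, 8), "The soup"),
        FrameElement("Location", (23, 40), "in the old tavern"),
    ],
)
print(frame.lexical_unit, [e.name for e in frame.elements])
```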
In this paper, we presented a benchmark for gustatory events containing manually annotated taste-related information, built as a counterpart to the one proposed in AUTHOR. The benchmark is constructed with the same approach, adopting a frame-based methodological framework to analyze sensory language. We emphasized the importance of frame-based analysis for capturing sensory events by exploring the characterization of positive and negative valence in the benchmarks through the analysis of taste and smell words and sources. Frame-based analysis seems to bring relevant insights into capturing sensory valence from different perspectives, supporting the suitability of this approach for humanistic inquiries. We then presented a supervised system to automatically extract taste-related frames, trained on this benchmark. This preliminary exploration and the results obtained in our experiments seem promising for future work with automatically extracted data. Indeed, the benchmark's limited data are not sufficient to draw firm conclusions, and for this reason we plan to use our system to extract more data and conduct large-scale analyses of the evolution of sensory information over time. The limited number of documents is likely a contributing factor to the significant discrepancies in accuracy among the different frame elements; more instances are needed to enable good generalization. Future steps should involve increasing the number of documents and providing less sparse annotations, aiming for better temporal balance. The focus should be on annotating frame elements with lower scores and fewer instances in the benchmark, such as Taste\_Carrier and Location. Additionally, alternative metrics and techniques should be employed to capture and explain performance variations across different models. As a further comparison, we also plan to assess the performance of general-purpose frame semantic parsers like LOME AUTHOR on our benchmark.
6
Sentiment, Emotion, Irony, Hate
654_2024
2,024
Manuela Sanguinetti, Alessandro Pani, Alessandra Perniciano, Luca Zedda, Andrea Loddo, Maurizio Atzori
Assessing Italian Large Language Models on Energy Feedback Generation: A Human Evaluation Study
ENG
6
2
1
Università di Cagliari
1
0
0
0
0
0
0
Italy
Cagliari
This work presents a comparison of some recently released instruction-tuned large language models for the Italian language, focusing in particular on their effectiveness in a specific application scenario, i.e., that of delivering energy feedback. This work is part of a larger project aimed at developing a conversational interface for users of a renewable energy community, where clarity and accuracy of the provided feedback are important for proper energy management. The comparison is based on the human evaluation of the output produced by such models using energy data as input. Specifically, the data pertain to the power flows within a household equipped with a photovoltaic (PV) plant and a battery storage system. The goal of the feedback is precisely that of providing the user with such information in a meaningful way, based on the specific aspect they intend to monitor at a given moment (e.g., self-consumption levels, the power generated by the PV panels or imported from the main grid, or the battery state of charge). This evaluation experiment serves a two-fold purpose: it provides an exploratory analysis of the models' abilities on this specific generation task, relying solely on the information and instructions provided in the prompt, and an initial investigation into their potential as reliable tools for generating user-friendly energy feedback in the intended scenario.
The provision of energy feedback plays a crucial role in promoting energy efficiency among users. The expression energy feedback (or eco-feedback) covers a wide range of energy-related information. This can include detailed reports on energy usage and production (in the case of renewable energy sources), as well as energy-saving advice, whether generic or user-specific. The primary goal of energy feedback is to allow users to make informed decisions regarding their energy management, thus promoting better conservation practices. A substantial body of literature within the field of Human-Computer Interaction (HCI) has explored various energy feedback mechanisms, primarily focusing on visual or ambient feedback as well as gamification techniques (we refer to the surveys proposed by \citet{Albertarelli-etal-2018} and \citet{Chalal-etal-2022} for further details on these aspects). More recently, however, growing interest has been reported in the delivery of energy feedback through conversational agents AUTHOR. Furthermore, within the field of Natural Language Generation (NLG), several studies prior to the advent of Large Language Models (LLMs) investigated the use of NLG architectures to communicate consumption data. Notable works include those by \citet{Trivino-Sanchez-2015} and \citet{Conde-Clemente-etal-2018}, which used fuzzy sets to tackle data-to-text generation tasks, also tailoring the linguistic description to given consumption profiles. Similarly, \citet{martinez-municio_linguistic_2018} employed fuzzy sets to produce linguistic summaries based on the consumption of specific buildings or groups of buildings, using time series data as input. This work is part of a research project aimed at developing a modular task-oriented conversational agent to inform users about their energy consumption and photovoltaic (PV) production and, more generally, to support better management of their energy resources through text-based energy feedback. The conversational agent will then be deployed and tested within a renewable energy community in Italy, which motivates our specific focus on Italian as the primary language for the interactions. At this stage of the project, we plan to integrate the generative abilities of LLMs into the conversational pipeline. This approach is expected to deliver more varied and dynamic responses instead of predefined, static templates, possibly making the user experience more enjoyable. This study was driven by the need to obtain more quantitative insights into the expected performance of such models when tasked with the delivery of energy feedback based on actual energy data. This study thus aims to verify how effectively instruction-tuned LLMs currently available for the Italian language can deliver clear and accurate feedback based on energy data provided within a prompt, without relying on more elaborate techniques like fine-tuning or Retrieval Augmented Generation. More specifically, we formulated the following research questions: (i) Are the LLMs under study able to produce energy feedback that is 1) informative, 2) comprehensible, and 3) accurate with respect to the provided energy data? (ii) Are there any major differences among such models with respect to these capabilities? To answer these questions, we conducted an exploratory analysis by manually evaluating some of these Italian LLMs, organizing the study around criteria designed to quantify these specific aspects.
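To make the setup concrete, the following is a minimal sketch of how household power-flow data might be serialized into a prompt for an instruction-tuned model; the field names and prompt wording are hypothetical and not the paper's actual template.

```python
# Sketch of turning household power-flow data into an LLM prompt;
# field names and wording are illustrative assumptions.
energy_data = {
    "pv_power_kw": 3.2,          # power generated by the PV panels
    "grid_import_kw": 0.4,       # power imported from the main grid
    "battery_soc_pct": 76,       # battery state of charge
    "self_consumption_pct": 88,  # share of PV production used on site
}

def build_prompt(data, aspect="self-consumption"):
    lines = "\n".join(f"- {k}: {v}" for k, v in data.items())
    return (
        "You are an assistant for a renewable energy community.\n"
        f"Current household measurements:\n{lines}\n"
        f"Explain the {aspect} situation to the user in clear, "
        "accurate, non-technical Italian."
    )

print(build_prompt(energy_data))
```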
This work closely aligns with a recent initiative launched within the Italian NLP community, i.e., CALAMITA, a campaign aimed at evaluating the capabilities of Italian (or multilingual, but including Italian) LLMs on specific tasks in zero- or few-shot settings. Unlike the latter, however, our study relies solely on human judgments rather than automatic metrics. The main challenges of a manual approach include the absence of standardized practices and evaluation criteria AUTHOR, as well as the lack of systematic documentation AUTHOR, which hinders the reproducibility of such studies. In light of these challenges, the intended contributions of this paper are: (i) a small-scale human evaluation of several Italian LLMs on a specific task; (ii) the description of a protocol for human evaluation inspired by the good practices recommended in recent literature AUTHOR. To this end, we also make available the evaluation dataset, with the ratings assigned by the evaluators in non-aggregated form. The remainder of this paper describes how this study was designed and carried out, with a discussion of the results obtained and the main limitations of the work.
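Since the ratings are released in non-aggregated form, a downstream user might aggregate them along the following lines; this is a minimal sketch with illustrative criteria names and scores, not the paper's analysis code.

```python
# Sketch of aggregating non-aggregated human ratings per criterion;
# item ids, criteria names, and scores are illustrative.
from statistics import mean
from itertools import combinations

# ratings[item_id][criterion] -> list of scores from the judges
ratings = {
    "feedback_01": {"informativeness": [4, 5, 4, 3],
                    "comprehensibility": [5, 5, 4, 5]},
}

for item, crits in ratings.items():
    for crit, scores in crits.items():
        avg = mean(scores)
        # mean pairwise absolute difference as a rough disagreement proxy
        spread = mean(abs(a - b) for a, b in combinations(scores, 2))
        print(f"{item} | {crit}: mean={avg:.2f}, disagreement={spread:.2f}")
```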
This study provides an initial assessment of several Italian language models' ability to generate effective energy feedback. %The evaluation focused on three key dimensions—informativeness, comprehensibility, and accuracy—organized into five evaluation criteria. The results indicate that while the models generally perform well, particularly in terms of comprehensibility and accuracy, there is greater variability regarding informativeness. Among the tested models, results show that, except for LLaMAntino2-7B-UltraChat, the remaining ones provide comparable performances. However, it is important to highlight the limitations of this study. First, this is a small-scale study, as it involves a limited number of models and evaluators. Concerning the former issue, we also point out that the study was restricted to models available on HuggingFace, excluding potentially relevant models from external sources, such as Fauno} and Camoscio AUTHOR. A more systematic study should consider these models as well, in order to provide a more comprehensive evaluation over the Italian LLMs' landscape. As for the pool of evaluators, it is important to note a significant bias in both their personal backgrounds and demographics. All the judges have a background in computer science and varying degrees of familiarity with the topics at hand. Furthermore, there is a gender imbalance (1 female and 3 male judges) and a lack of age diversity, as all four judges fall within the 24–30 age range. In light of these considerations, a more systematic comparison as the one envisioned above would benefit from a broader and more diverse pool of evaluators. This would not only increase the reliability of the comparison but also provide a deeper understanding of potential correlations between socio-demographic factors, prior knowledge of technology and energy-related concepts, and the differing perceptions of the evaluation criteria considered in our study. Common approaches to address the lack of human participants include the use of crowdsourcing platforms, with a careful design of participation criteria that would ensure a better gender and demographic balance. Alternatively, a user study involving prospective users of the conversational agent could be conducted; this would ultimately enable to gather valuable insights on the type of feedback expected by the target audience. Finally, an extended evaluation framework should also include an analysis of the statistical power of the sample size to ensure more robust conclusions. Despite these limitations, this work offers a preliminary overview and aims to pave the way for future research on this aspect, also stressing the importance of more standardized human evaluation practices. As a matter of fact, the evaluation protocol we designed draws heavily from methodologies recommended in more general literature pertaining to human evaluation within generation and summarization tasks. Our approach thus aims to ensure that the core principles of the experiment are flexible enough to be easily replicated or adapted for a wider range of different domains. This work has been developed within the framework of the project e.INS- Ecosystem of Innovation for Next Generation Sardinia (cod. ECS 00000038) funded by the Italian Ministry for Research and Education (MUR) under the National Recovery and Resilience Plan (NRRP) - MISSION 4 COMPONENT 2, "From research to business" INVESTMENT 1.5, "Creation and strengthening of Ecosystems of innovation" and construction of "Territorial R\&D Leaders". 
This work was also partially funded under the National Recovery and Resilience Plan (NRRP) - Mission 4 Component 2 Investment 1.3, Project code PE0000021, “Network 4 Energy Sustainable Transition--NEST”.
1
Language Models
655_2024
2,024
Arianna Redaelli, Rachele Sprugnoli
Is Sentence Splitting a Solved Task? Experiments to the Intersection Between NLP and Italian Linguistics
ENG
2
2
1
Università di Parma
1
0
0
0
0
0
0
Italy
Parma
Sentence splitting, that is, the segmentation of raw input text into sentences, is a fundamental step in text processing. Although it is considered a solved task for texts such as news articles and Wikipedia pages, the performance of systems can vary greatly depending on the text genre. This paper presents an evaluation of the performance of eight sentence splitting tools adopting different approaches (rule-based, supervised, semi-supervised, and unsupervised learning) on Italian 19th-century novels, a genre that has not received sufficient attention so far but which can be an interesting common ground between Natural Language Processing and Digital Humanities.
Sentence splitting is the process of segmenting a text into sentences. A sentence ends with a strong punctuation mark (e.g., full stop, question mark, or exclamation point) and is typically followed by a capital letter. The definition of sentence adopted here, which like any definition is inherently problematic, is motivated by the specific requirements of the present work, as will be seen below. Sentences are thus identified by detecting their boundaries, which, at least for Western languages, including Italian, usually correspond to certain punctuation marks AUTHOR. This means that sentence splitting, for many languages, is a matter of punctuation disambiguation, that is, recognizing whether or not a punctuation mark signals a sentence boundary. The importance of sentence splitting is often underestimated because it is considered an easy task, but its quality has a strong impact on subsequent text processing, because errors can propagate and reduce the performance of downstream tasks such as Syntactic Analysis AUTHOR, Machine Translation AUTHOR and Automatic Summarization AUTHOR. The most popular pipeline models, such as those of Stanza AUTHOR and spaCy, have mostly been trained and evaluated on fairly formal texts, such as news articles and Wikipedia pages, so the publicly reported performances tend to be high, i.e., above 0.90 in terms of F1. However, the text genre has a significant impact on the results. For example, in the CoNLL 2018 shared task ``Multilingual Parsing from Raw Text to Universal Dependencies'', the best system on the Italian ISDT treebank AUTHOR achieved an F1 of 0.99, while on the PoSTWITA treebank, made of tweets AUTHOR, the highest result was 0.66. Given these variations, considering less formal text genres could provide valuable insights into the challenges of sentence splitting. Among these genres are literary texts, which present unique and peculiar stylistic and creative features that can break traditional grammatical norms, including those governing punctuation AUTHOR. These features depend on both authorial choices and the cultural context of the time. As a matter of fact, punctuation can vary significantly depending on the historical period; literary texts may follow prevailing trends or oppose them, giving rise to new trends. This phenomenon is particularly evident in the 19th century, when the Italian usus punctandi began shifting from a primarily syntactic usage, prescribed by grammar books, to a communicative-textual usage of punctuation marks AUTHOR. Since this shift was probably influenced by the reflections and the practical uses of prominent authors such as Alessandro Manzoni AUTHOR, our study focuses on his historical novel, ``I Promessi Sposi''. The author paid meticulous attention to the punctuation of the text, revising it up to the final print proofs, and made specific and personal choices in collaboration with the publisher, alongside more classical ones AUTHOR. Although not always consistent, Manzoni's decisions make the novel particularly complex and interesting from a punctuation perspective. Furthermore, ``I Promessi Sposi'' has been a fundamental reference for the development of a common written Italian language: starting from this assumption, many of the author's punctuation choices have been adopted by later grammars for rule-making, though only some of them have become part of the standard.
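A minimal sketch of boundary-level evaluation helps clarify why punctuation-driven splitters struggle on this genre; the regex baseline below is illustrative and is not one of the eight tools evaluated in the paper.

```python
# Sketch of boundary-level evaluation for sentence splitting; the naive
# regex splitter is illustrative, not one of the tools in the paper.
import re

def split_naive(text):
    """Rule-based baseline: split after . ! ? followed by whitespace."""
    return [s for s in re.split(r"(?<=[.!?])\s+", text) if s]

def boundaries(sentences):
    """Character offsets of sentence ends when re-joined with spaces."""
    out, pos = set(), 0
    for s in sentences:
        pos += len(s)
        out.add(pos)
        pos += 1  # the joining space
    return out

# In 19th-century dialogue, '?' often does not end the sentence:
gold = ["Che vuol ch'io faccia del suo latinorum? disse Renzo."]
pred = split_naive(" ".join(gold))

g, p = boundaries(gold), boundaries(pred)
tp = len(g & p)
precision, recall = tp / len(p), tp / len(g)
f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
print(pred)          # two predicted sentences instead of one
print(round(f1, 3))  # 0.667
```

Here the naive splitter breaks the dialogue line at the question mark, producing a spurious boundary and an F1 of about 0.67 against the single gold sentence.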
Given that punctuation was still undergoing standardization at the time, and that its use can depend not only on the conventions of the period but also on the writer's personal style, the type of content being addressed (and how it is presented), and even the influence of typography during the printing process, we also decided to broaden our study to include sections from other novels contemporary to Manzoni's (1840-42). Specifically, we analyzed "I Malavoglia" (1881) by Giovanni Verga, "Le avventure di Pinocchio. Storia di un burattino" (1883) by Carlo Collodi, and "Cuore" (1886) by Edmondo de Amicis. In this paper, our main contributions are as follows: (i) we provide an estimate of the performance of eight sentence splitting tools adopting different approaches on a specific and challenging text genre, namely historical literary fiction, which has not received enough attention so far; (ii) we compare the results from the point of view of humanities scholars (in particular of Italian linguistics) as the main stakeholders in the considered domain, in order to establish a flourishing cross-fertilization between NLP and Digital Humanities; (iii) we release manually split data for four 19th-century Italian novels and a shared notebook where many of the tested systems can be run.
This paper presents an assessment of the performance of eight sentence splitting tools adopting different approaches on four 19th-century novels: "I Promessi Sposi" by Alessandro Manzoni, "I Malavoglia" by Giovanni Verga, "Le avventure di Pinocchio" by Carlo Collodi, and "Cuore" by Edmondo de Amicis. Although these texts belong to the same historical period, they show specific features depending on the form and content of the novel as well as the author's stylistic choices. Among these features is punctuation, which in the late 19th century had not yet reached a detectable stability and was instead undergoing a paradigmatic change. Since sentence splitting for Western languages, including Italian, relies heavily on punctuation disambiguation, applying existing tools to the four novels considered resulted in performances well below the usual standards. These texts demonstrate that sentence splitting is not a completely solved task. On the other hand, applying the model retrained on "I Promessi Sposi" to the other three novels showed significant improvements for "Le avventure di Pinocchio" and "I Malavoglia", and a moderate improvement for "Cuore". This result suggests that a shared historical context and belonging to the same textual genre may offer sufficient similarities to improve the model's performance. However, the example of "Cuore" is evidence of how this is sometimes not enough: some specific features in form, punctuation and style continue to affect sentence splitting, demonstrating that although retraining may mitigate some problems, it does not completely overcome the inherent variability of these texts. Philologists have increasingly focused on preserving the original punctuation as part of the author's creation of the text, providing valuable and reliable study material for scholars of linguistics and the history of the Italian language. Their combined knowledge is precious for achieving accurate sentence splitting in these texts. Thus, sentence splitting can be an interesting common ground between different disciplines, potentially leading to the development of tools for the automatic analysis of historical literary texts. This field remains under-explored in the Italian context, offering significant opportunities for further study and cross-disciplinary collaboration. This publication was produced by a researcher with a research contract co-funded by the European Union - PON Research and Innovation 2014-2020, pursuant to Art. 24, paragraph 3, letter a), of Law No. 240 of 30 December 2010, as amended, and Ministerial Decree No. 1062 of 10 August 2021.
4
Syntax and Dependency Treebanks
656_2024
2,024
Rachele Sprugnoli, Arianna Redaelli
Annotation and Detection of Emotion Polarity in I Promessi Sposi: Dataset and Experiments
ENG
2
2
1
Università di Parma
1
0
0
0
0
0
0
Italy
Parma
Emotions play a crucial role in literature and are studied by various disciplines, e.g., literary criticism, psychology and anthropology, and, more recently, also with computational methods in NLP. However, studies in the Italian context are still limited. This work therefore aims to advance the state of the art in the field of emotion analysis applied to historical texts by proposing a new dataset and describing the results of a set of emotion polarity detection experiments. The text analyzed is ``I Promessi Sposi'' in its final edition (published in 1840), one of the most important novels in the Italian literary and linguistic canon.
Emotions play a key role in literature, representing a bridge between the author's purposes, the text, and the reader's personal background: literature collects experiences and contains the emotions that accompany them, in turn generating new experiences and new emotions. Therefore, studying emotions in literary texts implies the possibility of providing valuable insights into the deeper meanings and intentions behind a work, the form it may take, and the readers' engagement with it. This field of study has recently experienced a flourishing national and international development involving different disciplines, from literary criticism to philosophy, from anthropology to psychology. For example, in the Italian context, Ginzburg et al. AUTHOR analyzed how Matte Blanco's psychoanalytic theories on emotions are applied to literary criticism, taking into account authors like Tozzi, Pirandello, and Svevo, while Guaragnella AUTHOR explored the complex interaction between humor and sadness in 20th-century Italian literature from both a philosophical and a literary point of view. However, some literary works have remained under-explored. One such work is Alessandro Manzoni's ``I Promessi Sposi''. Despite its emotional richness, the novel has often been regarded as monolithic and static, both because of the narrated events, strongly influenced by the author's religious spirit and social and political polemic, and because it quickly became a model of the Italian language, firmly established in school curricula as mandatory study material. This has led to a certain degree of reluctance and lack of enthusiasm among readers. As a consequence, a study of emotions in ``I Promessi Sposi'' can be beneficial from both an academic and an educational standpoint. Academically, it can provide new insights into a classic text, encouraging new interpretations and scholarly discussions. For didactic purposes, analyzing the emotions in ``I Promessi Sposi'' can make the novel more relatable and appealing for students, revealing the depth and complexity of the characters' experiences in the context in which they live, and encouraging a closer connection with them and with Manzoni's social issues. Given this context, computational methods, already widely applied especially to user-generated content (such as reviews and social media posts), can be profitably tested on fictional text after developing specific datasets for training and evaluating new models. The present work takes as a basis a preliminary annotation of Manzoni's novel, expanding the number of manually labeled sentences and proposing the development of models of varying complexity. More specifically, our work makes two main contributions: i) we release a new dataset made of more than 3,000 sentences taken from ``I Promessi Sposi'', manually annotated with four emotion polarity classes (i.e., POSITIVE, NEGATIVE, NEUTRAL, MIXED); ii) we test various approaches for emotion polarity detection, using the new dataset as-is but also augmenting it with other annotated Italian resources.
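For the model-based approaches, a minimal sketch of a four-class polarity classifier is shown below, assuming the Hugging Face transformers API and one plausible Italian BERT checkpoint; this is not the authors' exact training setup.

```python
# Sketch of a 4-class polarity classifier with Hugging Face transformers;
# the checkpoint is one plausible choice, not necessarily the paper's.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

labels = ["POSITIVE", "NEGATIVE", "NEUTRAL", "MIXED"]
name = "dbmdz/bert-base-italian-xxl-cased"

tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(labels)
)

sent = "Addio, monti sorgenti dall'acque, ed elevati al cielo!"
inputs = tokenizer(sent, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is untrained here, so the prediction is
# arbitrary until the model is fine-tuned on the annotated dataset.
print(labels[int(logits.argmax(dim=-1))])
```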
This paper presents a new manually annotated dataset and a set of experiments for the automatic detection of emotion polarity. More specifically, the dataset contains 3,095 sentences taken from ``I Promessi Sposi'', and the experiments cover different approaches, namely lexicon-based, SVC, and the fine-tuning of an Italian BERT model and of the multilingual XLM-RoBERTa model. The impact of the training set size is also evaluated by increasing the in-domain dataset with other annotated Italian resources. We are aware that for the emotion analysis task, as for all NLP tasks, Large Language Models are now widely used AUTHOR, but these require computational resources currently not available to the authors of the paper. In the future, our work will focus on this aspect in order to be in line with the current state of the art. Another line of future work concerns the annotation of emotions with more granular labels, extending an activity already started on Chapter VIII only, on which the label scheme proposed for the GoEmotions dataset AUTHOR was applied AUTHOR. Additionally, we plan to pay greater attention to the annotation of irony, a crucial aspect of the novel. This could be incorporated into the dataset using a binary 0/1 value to indicate its presence or absence, as we have already begun to implement. Finally, we would like to explore the applications of our work in the school context. Concerning the study of emotions in Manzoni's novel, computational methods and tools could provide inputs and data useful for practical didactic activities, such as visual representations of affective scenes, role-playing exercises, or even crowd-sourced annotation that allows students to express their personal interpretations of the characters' emotions in different chapters and situations. Activities like these can make the whole learning experience more dynamic and captivating, promoting a deeper connection between the students and the novel and, meanwhile, improving their critical thinking and empathy. The authors thank Giovanni Moretti for the help given with the fine-tuning scripts. This publication was produced by a researcher with a research contract co-funded by the European Union - PON Research and Innovation 2014-2020, pursuant to Art. 24, paragraph 3, letter a), of Law No. 240 of 30 December 2010, as amended, and Ministerial Decree No. 1062 of 10 August 2021.
6
Sentiment, Emotion, Irony, Hate
657_2024
2,024
Irene De Felice, Francesca Strik-Lievers
Building a pragmatically annotated diachronic corpus: the DIADIta project
ENG
2
2
1
Università del Piemonte Orientale, Università di Genova
2
0
0
0
0
0
0
Italy
Vercelli, Genova
We present here the first stages of the construction of the DIADIta corpus, a diachronic corpus of Italian annotated for interactional pragmatic phenomena. This corpus aims to fill a gap in the resources available for the historical pragmatics of Italian. First, we describe the annotation scheme, which is structured into four levels covering a wide range of pragmatic (or pragmatically relevant) categories: speech acts (e.g., apology; threat), forms (e.g., discourse marker; expressive), pragmatic functions (which are speaker-oriented, e.g., mitigation; turn-taking), and pragmatic aims (which are interlocutor-oriented, e.g., attention-getting; request for agreement). We then discuss how the results of an initial annotation exercise provide insights for refining the annotation procedure.
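A single annotated utterance under this four-level scheme might be represented as a simple record like the following sketch; the category values echo the examples above, while the utterance itself and the dictionary format are illustrative assumptions.

```python
# Sketch of a four-level pragmatic annotation record for the DIADIta
# scheme; category values echo the paper's examples, the utterance and
# the container format are illustrative.
annotation = {
    "text": "Scusate, posso dire una cosa?",  # "Excuse me, may I say something?"
    "speech_act": "apology",                  # e.g., apology; threat
    "form": "expressive",                     # e.g., discourse marker; expressive
    "pragmatic_function": "turn-taking",      # speaker-oriented
    "pragmatic_aim": "attention-getting",     # interlocutor-oriented
}
print(annotation["speech_act"], annotation["pragmatic_aim"])
```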
The DIADIta project, situated within the framework of historical pragmatics [1], aims to investigate the specific pragmatic features and strategies of dialogic interaction in different phases of the Italian language, and to understand how these features and strategies interrelate with one another and change over time. Although the last fifteen years have witnessed a growing interest in the historical pragmatics of Italian [2], there is still a lack of an in-depth study on this topic, one that is able to fully account for how different communicative strategies and different linguistic categories (primarily, but not exclusively, pragmatic) interact with each other, both from a synchronic and a diachronic perspective. The DIADIta project aims to address this gap. A key goal of the project is to build a diachronic corpus annotated for a wide range of pragmatically relevant linguistic phenomena. The DIADIta corpus, which will contribute to the recently established field of diachronic corpus pragmatics [3], will consist of at least 24 Italian literary texts of different genres dating from the 13th to the 20th century: in most cases, plays, novels and short stories where dialogic interactions between characters are particularly frequent. Once completed, the corpus will be freely accessible and searchable from the project website (www.diadita.it) and will possibly be further expanded and enriched with other texts of different literary genres. In this paper, we present the first steps we have taken to lay the foundation for the DIADIta corpus. After a brief review of related literature and resources (Section 2), we describe the structure of the annotation scheme, outlining the theoretical and methodological assumptions that underlie it and highlighting its most innovative aspects (Section 3). Then, we present the results of an annotation exercise on a play by Luigi Pirandello, with which we tested the reliability of the scheme. In light of these results, we also briefly discuss some improvements that we plan to apply in the next stages of the corpus annotation process (Section 4). The last section draws the conclusions of the study (Section 5).
This paper has outlined the initial steps in creating the DIADIta corpus, a pragmatically annotated diachronic corpus for Italian. This corpus is characterized by its rich, multi-layered annotation scheme organized into four dimensions: forms, pragmatic functions, pragmatic aims, speech acts. This structure allows for nuanced analysis of pragmatic strategies in literary texts from the 13th to the 20th century. The innovative approach of annotating complex interactional features highlights the value of this corpus as an unparalleled tool for examining the evolution of pragmatic functions and forms over time, enabling detailed and multi-dimensional analysis of text data. We have also detailed an annotation exercise on a play by Pirandello that illustrates the task’s complexity (reflected in the low level of agreement in some layers), but also the richness of the annotations. This first exercise is crucial for refining the annotation process and improving clarity and reliability in applying a pragmatic annotation model to historical texts.
13
Multimodal
658_2024
2,024
Mauro Bruno, Elena Catanese, Francesco Ortame
Towards a Hate Speech Index with Attention-based LSTMs and XLM-RoBERTa
ENG
3
1
0
Istituto nazionale di statistica (Istat)
1
0
0
0
0
0
0
Italy
Rome
The diffusion of hate speech on social media requires robust detection mechanisms to measure its harmful impact. However, detecting hate speech, particularly in the complex linguistic environments of social media, presents significant challenges due to slang, sarcasm, and neologisms. State-of-the-art methods like Large Language Models (LLMs) demonstrate strong contextual understanding, but they often require prohibitive computational resources. To address this, we propose two solutions: (1) a bidirectional long short-term memory network with an attention mechanism (AT-BiLSTM) to enhance the model's interpretability and natural language understanding, and (2) fine-tuned multilingual robustly optimized BERT (XLM-RoBERTa) models. Building on the promising results from EVALITA campaigns in hate speech detection, we develop robust classifiers to analyse 20.4 million Tweets related to migrants and ethnic minorities. Further, we utilise an additional custom labeled dataset (IstatHate) for benchmarking and training, and show how its inclusion can improve classification performance. Our best model outperforms top entries from previous EVALITA campaigns. Finally, we introduce Hate Speech Indices (HSI), which capture the dynamics of hate speech over time, and assess whether their main peaks correlate with major events.
Social media platforms provide a fertile ground for the dissemination of hate speech, particularly targeting vulnerable groups such as migrants and ethnic minorities. In the last decade, hateful speech on platforms like X has become a pressing issue, as it not only affects the individuals who are directly targeted, but also contributes to a climate of hostility and division. Detecting hate speech in social media content is crucial to analyse the safety and inclusivity of online platforms and social environments. Hate speech detection is inherently challenging due to the subtle and evolving nature of social media language. Tweets often contain slang, neologisms, and sarcasm, which complicates the identification process. Traditional text classification methods usually fall short in addressing these challenges, especially for non-English languages where extensive labeled training sets are not easy to gather, calling for the development of more sophisticated approaches. The topic of hate speech detection in Italian texts has gained significant attention within the natural language processing (NLP) community, as shown by the HaSpeeDe (Hate Speech Detection) tasks at EVALITA. For instance, the EVALITA 2018 AUTHOR and 2020 AUTHOR campaigns have provided labeled datasets and attracted several submissions employing a diverse set of machine learning and deep learning techniques. A prominent approach in recent hate speech detection and, in general, text classification, is the use of pre-trained language models like Bidirectional Encoder Representations from Transformers (BERT) AUTHOR. After their first appearance in 2018, BERT-based models have set new standards in several NLP tasks thanks to their ability to capture contextual information effectively, especially when fine-tuned on the specific task of interest. In 2019, a multilingual robustly optimized BERT (XLM-RoBERTa) AUTHOR was published, making it possible to obtain higher performance on non-English texts. For instance, TheNorth team obtained the best results on the HaSpeeDe 2 task at EVALITA 2020 by fine-tuning an XLM-RoBERTa model AUTHOR. It is also worth noting that in recent years, generative Large Language Models (LLMs) have demonstrated an even more impressive ability to understand natural language. However, their large number of parameters makes them impractical for classifying large volumes of data, even when compared to the larger version of XLM-RoBERTa\footnote{The number of parameters in Large Language Models ranges from a few billion to hundreds of billions, while the large version of XLM-RoBERTa ``only'' has 561 million parameters.}. Given these developments and challenges, our research proposes two approaches to hate speech classification: (1) an attention-based bidirectional long short-term memory network (AT-BiLSTM), benchmarked against a standard BiLSTM model, and (2) a fine-tuned XLM-RoBERTa (large) model, benchmarked against its base, smaller version. We use two labeled training sets: (a) the EVALITA 2020 HaSpeeDe 2 task dataset, and (b) a custom, smaller labeled dataset, which we refer to as IstatHate. Our study explores the impact of training models on both the EVALITA dataset alone and a combined dataset that includes EVALITA and IstatHate, evaluating their performance across multiple test sets.
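The attention mechanism that makes such a model interpretable can be sketched in a few lines of PyTorch; the following is a generic additive-attention BiLSTM classifier, not the authors' exact AT-BiLSTM architecture or hyperparameters.

```python
# Sketch of a BiLSTM with attention for binary hate speech
# classification; a generic illustration, not the authors' AT-BiLSTM.
import torch
import torch.nn as nn

class AttBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)  # one score per timestep
        self.out = nn.Linear(2 * hidden, 2)  # hateful vs. non-hateful

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))        # (B, T, 2H)
        weights = torch.softmax(self.att(h), dim=1)  # (B, T, 1)
        context = (weights * h).sum(dim=1)           # attention pooling
        return self.out(context), weights.squeeze(-1)

model = AttBiLSTM(vocab_size=30_000)
logits, attn = model(torch.randint(1, 30_000, (2, 12)))
print(logits.shape, attn.shape)  # -> (2, 2) and (2, 12)
```

The returned per-token weights are what enables the kind of attention-score visualization the paper uses to compare models on the same Tweets.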
Finally, we present a preliminary version of the Hate Speech Index (HSI), designed to quantify the proportion of hate speech by classifying 20.4 million Italian Tweets related to migrants and ethnic minorities from January 2018 to February 2023.
This study addressed the issue of hate speech detection on social media, specifically focusing on X (formerly Twitter) and on migrants and ethnic minorities. Given the complexities of natural language on these platforms, we explored different approaches, including lighter bidirectional LSTM models with and without attention mechanisms, and fine-tuned XLM-RoBERTa models in both their base and large formats. We trained our models on EVALITA 2020 HaSpeeDe 2 data and also introduced a small labeled dataset, IstatHate, which improves the performance of the already best performing model, XLM-RoBERTa-large, when included in the training set. Despite longer inference times and the higher computational resources required for large amounts of data, heavier models like XLM-RoBERTa-large achieve significantly higher performance and generalization capabilities. Yet, AT-BiLSTM\star (i.e., the AT-BiLSTM model that includes both EVALITA and IstatHate data in the training) outperforms XLM-RoBERTa-base\star across all test sets, a notable achievement considering the difference in model size and inference time. We compared the predictions of AT-BiLSTM-EV against AT-BiLSTM\star by visualising the attention scores they assigned to the same Tweets. Empirical evidence shows that including IstatHate in the training set may improve contextual understanding and mitigate the bias that simpler models like LSTMs may have when classifying hate speech in the presence of curse words. The preliminary computation of the Hate Speech Index (HSI) reveals significantly different levels of hate speech detection across different models and training sets, even though the training data have very similar characteristics. Fine-tuned XLM-RoBERTa models produce lower estimates, especially when IstatHate is included in the training set. Furthermore, when analysing hate peaks, XLM-RoBERTa-large\star predictions correlate highly with major events. Future work will focus on expanding and validating the IstatHate dataset, exploiting the sampling weights, refining model architectures, and exploring additional features to enhance detection capabilities.
6
Sentiment, Emotion, Irony, Hate
659_2024
2,024
Moritz Kronberger, Viviana Ventura
THAVQA: a German task-oriented VQA dataset annotated with human visual attention
ENG
2
1
0
Technische Hochschule Augsburg
1
1
1
2
Moritz Kronberger, Viviana Ventura
0
0
Germany
Augsburg
Video question answering (VQA) is a challenging task that requires models to generate answers by using both information from text and video. We present Task-oriented Human Attention Video Question Answering (THAVQA), a new VQA dataset consisting of third- and first-person videos of an instructor using a sewing machine. The sewing task is formalized step-by-step in a script: each step consists of a video annotated with German-language open-ended question and answer (QA) pairs and with human visual attention. This data collection is the first step toward a larger goal: to have an AI assistant help the user solve the task by answering questions about how to properly use the sewing machine. The paper also includes a first assessment of the performance of a pre-trained Multimodal Large Language Model (MLLM) in generating answers to the questions of our dataset across different experimental settings. Results show that our task-oriented dataset is challenging for pre-trained models. Specifically, the model struggles to answer questions requiring technical knowledge or spatio-temporal reasoning, while performance improves when the input includes text containing the required technical knowledge.
This paper presents a new VQA dataset based on demonstrating basic sewing machine operations. To our knowledge, THAVQA, which is also annotated with human visual attention, is the first task-oriented VQA dataset in the German language. The dataset is a first step in a larger project aimed at developing an AI assistant for a sewing machine workshop held at the Technische Hochschule Augsburg. This AI assistant would support students when using sewing machines for the first time. For example, this could mean answering questions about basic machine settings or explaining fundamental sewing skills. Our dataset poses unique challenges for VQA models and is almost unique among state-of-the-art VQA datasets, since it is user- and task-oriented: the questions collected are those that a real user would ask for help while using the sewing machine. The process of operating the sewing machine was decomposed in a script into steps and sub-steps, which were recorded and on which questions and answers were annotated. Specialized knowledge of the process and an understanding of spatial and temporal relationships are required to answer the questions collected. In addition, the limited visual variety of the video scenes and the specialized language and vocabulary challenge VQA models. Annotating human attention in the video inputs of VQA models has recently been shown to improve their performance on user- and task-oriented datasets. In our dataset, the workshop instructor's eye gaze has been used as a proxy for human visual attention. The underlying idea is that human visual attention, integrated as input into VQA models, can help the model distinguish between video frames, especially in datasets in which the recorded scenes are very similar to each other because there are few participants and staged events. Our paper also provides a first assessment of the VQA performance of the pre-trained MLLM Gemini 1.5 Pro on THAVQA. Indeed, new releases of LLMs, such as Gemini 1.5 but also GPT-4, Llama 2, or Claude 3, now allow for visual inputs, making it possible to perform VQA tasks using pre-trained models directly. To sum up, this paper presents: (i) a new dataset with third-person videos of an instructor operating a sewing machine and first-person videos annotated with human visual attention, QA pairs in German, and a script in German of the steps required to operate the machine; (ii) an evaluation of the performance of a pre-trained MLLM on generating open-ended answers from questions and videos of our dataset.
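Querying a pre-trained MLLM with a video and a German question can be done roughly as follows, assuming the google-generativeai Python SDK; the file name, question, and prompt wording are illustrative, and this is not the paper's exact evaluation protocol.

```python
# Sketch of querying a pretrained MLLM with video + German question,
# assuming the google-generativeai SDK; names are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical clip of one script step; uploaded videos may need a
# short wait until server-side processing finishes.
video = genai.upload_file("step_03_threading.mp4")

question = "Wie fädle ich den Oberfaden richtig ein?"
response = model.generate_content(
    [video, f"Beantworte die folgende Frage anhand des Videos: {question}"]
)
print(response.text)
```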
We provide THAVQA, a new task-oriented, German-language VQA dataset of demonstrations of sewing machine operation, with open-ended human QA pairs and human visual attention. We then compared the VQA performance of Gemini 1.5 Pro on THAVQA while varying the model inputs. We found that the task-oriented scenario of THAVQA is specific enough that the model cannot rely solely on its inherent knowledge to generate satisfactory responses, and the questions contained in our dataset were beyond the model's capacity to reason about the video data. Combining textual instructions with a first-person video resulted in the best-performing configuration across all reasoning types of questions. When looking towards the design of a VQA model for a future, practical sewing machine assistant, video inputs could therefore be used mainly to improve the model's perception abilities, while a retrieval system for textual information could provide the necessary specialized knowledge.
1
Language Models
660_2024
2,024
Giuseppe Attanasio, Pieter Delobelle, Moreno La Quatra, Andrea Santilli, Beatrice Savoldi
ItaEval and TweetyIta: A New Extensive Benchmark and Efficiency-First Language Model for Italian
ENG
5
1
0
Instituto de Telecomunicações, KU Leuven, Leuven.AI, Kore University of Enna, Sapienza Università di Roma, Fondazione Bruno Kessler
6
1
0
2
Giuseppe Attanasio, Pieter Delobelle
1
Pieter Delobelle
Italy, Portugal, Belgium
Lisboa, Leuven, Enna, Rome, Trento
Current development and benchmarking efforts for modern, large-scale Italian language models (LMs) are scattered. This paper situates such efforts by introducing two new resources: \itaeval, a comprehensive evaluation suite, and \tweetyita, an efficiency-first language model for Italian. Through \itaeval, we standardize evaluation across language understanding, commonsense and factual knowledge, and social bias-related tasks. In our attempt at language modeling, we experiment with efficient, tokenization-based adaptation techniques. Our \tweetyita shows encouraging results after training on as little as 5G tokens from natural Italian corpora. We benchmark an extensive list of models against \itaeval and find several interesting insights. Surprisingly, i) models trained predominantly on English data dominate the leaderboard; ii) \tweetyita is competitive against other forms of adaptation or inherently monolingual models; iii) natural language understanding tasks are especially challenging for current models. We release code and data at and host a live leaderboard at . This research has been carried out by the participants of the ItaLLM and ItaEval research sprints.
``The strength of the team is each individual member. The strength of each member is the team.'' -- Phil Jackson. The increasing availability of Italian corpora and related resources has sparked new interest in advancing the state of the art for language models. Various works have prioritized different approaches. \citet{sarti-nissim-2024-it5} build a T5 model AUTHOR from scratch and use standard fine-tuning for task specialization. More recent works experiment with efficient instruction fine-tuning AUTHOR or continual learning AUTHOR starting from autoregressive monolingual English models. Community-driven efforts and multilingual models that include Italian AUTHOR among their pretraining corpora complete the picture. Despite many modeling contributions, insights on evaluation remain partial and broadly scattered. Test-beds in \citet{sarti-nissim-2024-it5} include downstream language understanding tasks (e.g., text summarization or style transfer) but lack commonsense and factual tests, which are instead commonly central components of modern language model development (see, e.g., Apple's OpenELM AUTHOR). Some works follow this line AUTHOR while others lack a systematic quantitative evaluation AUTHOR. In this landscape, we are thus left with a puzzling scenario and several open questions: What is the current state-of-the-art model? Does a new state of the art exist at all? How are ``better'' or ``worse'' even measured? Which are the most critical weak spots for Italian state-of-the-art models? Which language training or adaptation technique yields better results for Italian? Leaving these paramount questions unanswered risks running computationally and environmentally expensive adaptation experiments with limited returns due to duplicated efforts or prioritization of dead ends. [Figure: \itaeval tasks challenge models on Natural Language Understanding (left), Commonsense and Factual Knowledge (center), and Bias and Fairness (right) datasets; data comes from Italian sources or machine-translated English corpora, and both pre-existing and new tasks are included.] This paper introduces two community-built resources to clarify the current development and evaluation of Italian language models. First, we release a new extensive evaluation suite to address the lack of multi-faceted assessment for Italian. \itaeval (v1.0) includes i) natural language understanding tasks (for comparability with existing benchmarks), ii) commonsense- and factual-knowledge-oriented tests (to align with new evaluation requirements for language models), and iii) bias, fairness and safety tests, which are often overlooked dimensions. The suite includes 18 tasks, built upon both ``native'' datasets (i.e., datasets whose data is originally collected in Italian) and machine-translated ones. To gain a more nuanced view of the types of adaptation to Italian, we release \tweetyita, a new efficiency-oriented 7B autoregressive, monolingual language model. Based on lightweight En→It token replacement, \tweetyita achieves surprising results after running language adaptation on as little as 5G Italian tokens. \paragraph{Contributions.} We release \itaeval v1.0, a new evaluation suite for Italian language models, and run several language models against it.
We release a new efficiency-oriented 7B language model and prove that token mapping is an efficient and competitive adaptation alternative for En→It model conversion. Code and data are released under a permissive license to foster research.
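One common way to realize En→It token replacement is to re-initialize the new vocabulary's embeddings from the old one; the sketch below illustrates this general family of techniques and is not necessarily the exact procedure used for \tweetyita. The checkpoints in the usage comment are hypothetical.

```python
# Sketch of tokenizer-swap adaptation (En -> It): initialize each Italian
# token's embedding from the English subtoken embeddings of its surface
# form. Illustrative of the general idea, not the exact TweetyIta recipe.
import torch

def remap_embeddings(en_tokenizer, it_tokenizer, en_emb: torch.Tensor):
    dim = en_emb.size(1)
    it_emb = torch.empty(len(it_tokenizer), dim)
    for tok, idx in it_tokenizer.get_vocab().items():
        piece = tok.replace("▁", " ")  # strip a SentencePiece word marker
        en_ids = en_tokenizer(piece, add_special_tokens=False)["input_ids"]
        if en_ids:
            # average the English subtoken embeddings for this Italian token
            it_emb[idx] = en_emb[en_ids].mean(dim=0)
        else:
            it_emb[idx] = en_emb.mean(dim=0)  # fallback: global mean
    return it_emb

# Hypothetical usage: load an English model's input embeddings, build
# it_emb with an Italian tokenizer, then install it_emb in the model
# before continuing pretraining on Italian text.
```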
In this work we introduced \itaeval (v1.0), an evaluation suite for Italian language models, and \tweetyita, an efficiency-first language model tailored for Italian. \itaeval standardizes evaluations across tasks in natural language understanding, commonsense and factual knowledge, and social bias. Empirical results show that \tweetyita performs competitively, demonstrating the effectiveness of efficient adaptation techniques. Interestingly, models trained mainly on English data lead the evaluation leaderboard, indicating the strength of cross-lingual training. We believe these contributions will help clarify the evaluation landscape for Italian language models and encourage further research. Looking ahead, we plan to expand \itaeval to enhance its scope and detail of evaluation. \itaeval and \tweetyita are the result of a joint effort of members of the ``Risorse per la Lingua Italiana'' community: we thank every member who dedicated their time to the project. We thank CINECA for providing the computational resources (ISCRA grant: HP10C3RW9F). The Portuguese Recovery and Resilience Plan supported the work by Giuseppe Attanasio through project C645008882-00000055 (Center for Responsible AI) and by Fundação para a Ciência e Tecnologia through contract UIDB/50008/2020. Beatrice Savoldi is supported by the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU.
1
Language Models
661_2024
2,024
Daniel Russo, Oscar Araque, Marco Guerini
To Click it or not to Click it: An Italian Dataset for Neutralising Clickbait Headlines
ENG
3
0
0
Università di Trento, Fondazione Bruno Kessler, University of Madrid
3
1
0
1
Oscar Araque
0
0
Italy, Spain
Trento, Madrid
Clickbait is a common technique aimed at attracting a reader's attention, although it can result in inaccuracies and lead to misinformation. This work explores the role of current Natural Language Processing methods in reducing its negative impact. To do so, a novel Italian dataset is created, containing manual annotations for classification, spoiling, and neutralisation of clickbait. In addition, several experimental evaluations are performed, assessing the performance of current language models. On the one hand, we evaluate performance on the task of clickbait detection in a multilingual setting, showing that augmenting the data with English instances largely improves overall performance. On the other hand, the generation tasks of clickbait spoiling and neutralisation are explored. The latter is a novel task, designed to increase the informativeness of a headline, thus removing the information gap. This work opens a new research avenue that has been largely uncharted in the Italian language.
Accuracy and truthfulness are essential characteristics of journalism. Nevertheless, in an effort to improve revenue, a large number of newspapers and magazines publish clickbait articles, a viral journalism strategy that seeks to attract users to click on a link to a page through tactics such as sensationalist stories and catchy headlines that act as bait. The use of these tactics harms the quality of news pieces and thus hinders the ability of citizens to obtain reliable and objective information. The literature distinguishes between two main types of clickbait. (i) Classical clickbait AUTHOR embeds information gaps, also known as curiosity gaps AUTHOR, within the headlines in order to arouse curiosity in readers, who are forced to access the article's content, which is ultimately disappointing. Classical clickbait usually makes use of hyperbolic language, caps lock, demonstrative pronouns and superlatives to grab the user's attention AUTHOR. (ii) Deceptive clickbait AUTHOR refers to headlines that resemble traditional media headlines by offering a summary of the article, still leading to content that differs from the reader's expectations. These headlines promise high news value but deliver content with low news value, resulting in reader disappointment. Although clickbait headlines are considered one of the less harmful forms of fake news, as their main goal is to increase profit by driving traffic to a website AUTHOR, they can sometimes pose a danger, especially when they deal with potentially harmful topics such as health and science. To address this problem, Natural Language Processing techniques have been widely employed to detect clickbait headlines, with a particular focus on the English language AUTHOR. \citet{hagen-etal-2022-clickbait} proposed the clickbait spoiling task, i.e., the generation of a short text that satisfies the curiosity induced by a clickbait post. In light of this, this work addresses the issue of clickbait in the Italian language, studying its characteristics and the possibilities of current technology to reduce its negative impact. In doing so, we have generated a novel Italian dataset that gathers a large collection of clickbait articles, which is made public for the community to use. We named the dataset ClickBaIT. This dataset contains instances manually annotated as clickbait/non-clickbait, as well as manually generated spoilers and neutralised headlines. We have also performed a thorough multilingual evaluation, exploiting the availability of English data to complement our dataset in the task of clickbait detection. Finally, this work also explores the use of our annotated dataset and large language models to automatically generate both spoilers and, as a novel task, a neutralised version of clickbait headlines. [Figure: experimental design for clickbait detection, spoiler generation, and clickbait neutralisation. The robot icon represents the language model used for either classification or generation: DistilBERT and Llama3-8B for task 1, and LLaMAntino-3-8B for tasks 2 and 3. The models were tested on the generative tasks in zero-shot, few-shot, and fine-tuning configurations, except for question rewriting, for which a few-shot approach was employed.]
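For the novel neutralisation task, a few-shot prompt might look like the following sketch; the example headlines and the neutral rewrite are invented for illustration and are not drawn from ClickBaIT.

```python
# Sketch of a few-shot prompt for clickbait neutralisation; the headlines
# and the neutral rewrite are invented, not ClickBaIT data.
FEW_SHOT = """Riscrivi il titolo in forma neutra e informativa.

Titolo: Non crederai mai a cosa ha fatto questo sindaco!
Neutro: Il sindaco annuncia un nuovo piano per le piste ciclabili.

Titolo: {headline}
Neutro:"""

prompt = FEW_SHOT.format(headline="Questo trucco ti cambierà la vita...")
print(prompt)
# The filled prompt could then be sent to an instruction-tuned model
# such as LLaMAntino-3-8B, the model used for generation in the paper.
```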
This work presents ClickBaIT, a novel Italian dataset for clickbait modelling, as well as a diverse set of experiments to assess the effectiveness of current models for clickbait detection, spoiling and neutralisation. The dataset includes news articles that have been manually annotated to indicate the presence of clickbait, spoilers associated with clickbait headlines, and their respective neutral headlines. The experiments explore the effectiveness of current NLP methods for modelling clickbait headlines in Italian through ClickBaIT. The evaluation of clickbait detection shows how training data can be augmented in a multilingual setting, which leads to classification improvements in line with previous research AUTHOR. The generation experiments, for both spoiling and neutralisation, show that the evaluated model does benefit from in-domain knowledge extracted from the proposed dataset: these informed generations are more accurate and align better with the gold text. Considering the effects of clickbait, we argue that while such articles may initially seem harmless, their lack of accuracy can have a detrimental effect on readers. This is clear when considering sensitive domains such as health. We therefore hope that this work facilitates future research on the topic, for example by addressing the link between clickbait and misinformation and considering both in a unified framework.
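To make the multilingual detection setup concrete, the following is a minimal sketch, not the authors' released code, of fine-tuning a multilingual DistilBERT on Italian clickbait headlines augmented with English instances; the checkpoint name, toy examples, and column names are illustrative assumptions.

    from datasets import Dataset, concatenate_datasets
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    # Toy Italian and English clickbait examples (label 1 = clickbait);
    # real experiments would load ClickBaIT and an English corpus instead.
    italian = Dataset.from_dict({"text": ["Non crederai mai a cosa è successo dopo"],
                                 "label": [1]})
    english = Dataset.from_dict({"text": ["You won't believe what happened next"],
                                 "label": [1]})
    train = concatenate_datasets([italian, english]).shuffle(seed=42)

    ckpt = "distilbert-base-multilingual-cased"  # assumed checkpoint
    tok = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

    train = train.map(lambda b: tok(b["text"], truncation=True,
                                    padding="max_length", max_length=64),
                      batched=True)

    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="clickbait-clf",
                                             num_train_epochs=3,
                                             per_device_train_batch_size=16),
                      train_dataset=train)
    trainer.train()

Mixing both languages in a single training set is what the reported augmentation amounts to; a shared multilingual encoder lets the English instances benefit the Italian task.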
15
Fact Checking and Fake News Detection
662_2024
2,024
Giulia Rizzi, Paolo Rosso, Elisabetta Fersini
From Explanation to Detection: Multimodal Insights into Disagreement in Misogynous Memes
ENG
3
2
1
Università di Milano Bicocca, University of Valencia
2
1
0
2
Giulia Rizzi, Paolo Rosso
0
0
Italy, Spain
Milan, Valencia
Warning: This paper contains examples of language and images that may be offensive. This paper presents a probabilistic approach to identifying the disagreement-related elements in misogynistic memes by considering both modalities that compose a meme (i.e., visual and textual sources). Several methodologies to exploit such elements in the identification of disagreement among annotators have been investigated and evaluated on the Multimedia Automatic Misogyny Identification (MAMI) AUTHOR dataset. The proposed unsupervised approach reaches performances comparable to, and in some cases even better than, state-of-the-art approaches, but with a reduced number of parameters to be estimated. The source code of our approaches is publicly available.
Hate detection has been a serious concern in recent years, as hate penetrates internet platforms and causes harm to individuals across various communities. Users have found in the online environment new modes of representation to express various types of hatred, including more deeply rooted ideologies and beliefs with historical origins, for example towards women AUTHOR. Detecting abusive language has become an increasingly important task. The challenges introduced by these new modes of representation, which require a multimodal analysis, are further compounded by the subjectivity of the task, which derives from the fact that individuals' perceptions of what characterizes a message of hate vary widely. This diversity is reflected in the labeling phase in the form of disagreement among annotators. Identifying the elements within a sample that can lead to disagreement is of paramount importance: for such content, specific annotation policies might be introduced, and the number of annotators might be enlarged to capture multiple perspectives AUTHOR. In this work, we propose a methodology to identify the disagreement-related elements in multimodal samples by exploring both the visual and textual elements of the Multimedia Automatic Misogyny Identification (MAMI) dataset AUTHOR. Moreover, four different strategies to exploit the presence of such elements in the identification of disagreement are investigated.
This paper proposes a probabilistic approach to identify disagreement-related elements in multimodal content. The proposed approach allows for the identification of elements that can serve as a proxy to identify samples that might be perceived differently by annotators and could therefore lead to disagreement. The achieved results highlight the difficulty of the task, denoting the need for more advanced approaches. Future work will include different strategies for image analysis in order to provide a better description of the image itself in all the elements that compose it. Furthermore, a study of compositionality might be carried out to better represent the relationships among such elements inside the meme. The meaning of a meme is often derived from the meanings of its individual parts (i.e., the image and the text) and the way they are combined. By analyzing how different elements interact and contribute to the overall message, it is possible to gain a deeper understanding of how meaning is represented within the different modalities. This will help in identifying complex patterns and improve the accuracy of classification models. We acknowledge the support of the PNRR ICSC National Research Centre for High Performance Computing, Big Data and Quantum Computing (CN00000013), under the NRRP MUR program funded by the NextGenerationEU. The work of Paolo Rosso was in the framework of the FairTransNLP-Stereotypes research project (PID2021-124361OB-C31) funded by MCIN/AEI/10.13039/501100011033 and by ERDF, EU A way of making Europe.
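As an illustration of the general idea, and not of the authors' exact probabilistic model, one can estimate for each textual element how often the samples containing it receive split annotator votes, and use those estimates as a disagreement proxy; the toy meme transcriptions and votes below are invented.

    from collections import defaultdict

    def is_split_vote(votes):
        # Any non-unanimous set of binary annotator votes counts as disagreement.
        return 0 < sum(votes) < len(votes)

    def element_disagreement_probs(samples):
        seen, split = defaultdict(int), defaultdict(int)
        for text, votes in samples:
            for token in set(text.lower().split()):
                seen[token] += 1
                split[token] += is_split_vote(votes)
        return {t: split[t] / seen[t] for t in seen}

    # Invented meme transcriptions with per-annotator misogyny votes.
    samples = [("women belong in the kitchen", [1, 1, 0]),
               ("cute cat picture", [0, 0, 0])]
    probs = element_disagreement_probs(samples)

    def disagreement_score(text, probs):
        # Aggregate element-level probabilities into a sample-level proxy.
        return max((probs.get(t, 0.0) for t in text.lower().split()), default=0.0)

    print(disagreement_score("back to the kitchen", probs))

A parallel estimate over visual elements (e.g., detected objects) would cover the second modality the paper considers.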
6
Sentiment, Emotion, Irony, Hate
663_2024
2,024
Chiara Ferrando, Marco Madeddu, Beatrice Antola, Sveva Silvia Pasini, Giulia Telari, Mirko Lai, Viviana Patti
Exploring YouTube Comments Reacting to Femicide News in Italian
ENG
7
5
1
Università di Torino, Università di Padova, Università di Pavia, Università del Piemonte Orientale
4
0
0
0
0
0
0
Italy
Turin, Padua, Pavia, Vercelli
In recent years, Gender Based Violence (GBV) has become an important issue in modern society and a central topic in different research areas due to its alarming spread. Several Natural Language Processing (NLP) studies concerning Hate Speech directed against women have focused on misogynistic behaviours, slurs or incel communities. The main contribution of our work is the creation of the first dataset of social media comments reacting to GBV, in particular to a femicide event. Our dataset, named GBV-Maltesi, contains 2,934 YouTube comments annotated following a new schema that we developed in order to study GBV and misogyny with an intersectional approach. During the experimental phase, we trained models on different corpora for binary misogyny detection and found that datasets that mostly include explicit expressions of misogyny pose an easier challenge compared to the more implicit forms of misogyny contained in GBV-Maltesi. Warning: This paper contains examples of offensive content.
Nowadays, the term Gender Based Violence (GBV) is used to identify all forms of abuse based on gender hatred and sexist discrimination AUTHOR. Scholars in social science have defined as "rape culture" a society that normalizes sexist behaviours: from more common occurrences like victim blaming, slut shaming and the gender pay gap, through catcalling, stalking and stealthing, to the apex of violence with femicide AUTHOR. While general violent crimes have decreased over time, GBV has not, alarming various bodies in modern society. A report from the EU commission states that 31%, 5% and 43% of European women have suffered from physical, sexual and psychological violence, respectively. Regarding the Internet sphere, a survey found that 73% of women journalists have experienced online violence (threats, belittling, shaming, etc.) AUTHOR. These statistics become even more alarming when we consider studies that show the correlation between misogynistic online posts and GBV AUTHOR. Like other countries, Italy is affected by GBV, with the national observatory managed by the "Non Una di Meno" association reporting 117 femicides in 2022, 120 in 2023 and more than 40 up to June 2024. Several studies about Hate Speech (HS) directed towards women focus on developing taxonomies AUTHOR rather than investigating low-resource subjects in computational linguistics like GBV. These works often gather corpora by keyword search of gender slurs AUTHOR, by retrieving comments left in misogynistic spaces like incel blogs AUTHOR, or by considering messages directed towards popular women figures highly debated on social media AUTHOR. As GBV is a broad topic, we want to clarify that we focus on GBV in Western societies, particularly in Italy. The main goal of this project is to show the current perception of femicides expressed through comments on social media, focusing on the specific case of Carol Maltesi. We chose this femicide because the victim was a sex worker, meaning that she presented an intersectional trait, and because it was a popular case in the media, enabling us to select enough material for the study. Further, we want to highlight how the socio-demographic characteristics of victims determine the way they are described and how this influences the perception of the news. For instance, a victim's features such as age, job, origin, skin color, nationality and religion carry different weight and determine the lesser or greater spread of the news AUTHOR. To overcome the cited issues in the current literature, in this research we considered the phenomenon by focusing on users' reactions on social media to news about femicides. We collected YouTube comments in response to videos discussing a specific case. In order to overcome the constraints of traditional sentiment analysis schemas, we annotated the data following a new semantic grid that can be used as a standard for comments regarding GBV. In the experimental phase of this work, we created models based on different Italian misogyny datasets (including ours). The goal of these experiments is to analyze the different features of these corpora and identify which forms of misogyny are harder to detect. We performed both a quantitative and a qualitative analysis of the results. In the next sections, we describe related work on hate speech and misogyny detection, the annotation scheme together with a quantitative and qualitative analysis of the dataset, and the results obtained in our experiments.
Lastly, we present our conclusions and outline possible future developments.
In this paper, we presented GBV-Maltesi, the first dataset on social reactions to GBV, in particular to a femicide case. The topic was chosen to shed light on the importance of having misogyny corpora that include forms of sexism that are more implicit and complicated to detect compared to existing ones that focus on slurs and offensive terms. We also focused on intersectionality aspects to better explore online hate. GBV-Maltesi is composed of 2,934 comments, all annotated by 5 annotators, and it is publicly available. In order to overcome the limitations of generic semantic schemas, the corpus has been annotated following a new schema specifically created for cases of GBV. In the experimental phase of our work, we created different binary misogyny classifiers and tested them in a cross-dataset way. We found that datasets gathered via keyword collection are easier benchmarks, as the models showed a bias towards slurs and failed to identify more implicit cases of misogyny. This research on online discourse about GBV is not meant to be exhaustive, as several questions remain open. As future work, we intend to focus on how different framings of news can cause different online reactions, analyzing the differences between video transcripts of femicide news and the comments collected in terms of words used, implicit references, attributions of guilt and descriptions of the people involved in the story. We also intend to gather more annotated corpora regarding femicides to explore how other characteristics of the victim (e.g., origin or skin color) and the time of the murder influence online reactions differently. In this regard, we intend to investigate whether and how the discourse on misogyny changes depending on whether it is addressed to living or dead women (e.g., the Giulia Cecchettin femicide and the abusive discourse against her sister, Elena Cecchettin). Lastly, we would like to extend our research by following an intersectional approach, considering all the dimensions and characteristics that make up the identity of both victim and perpetrator. To conclude, we strongly advocate the importance of writing news correctly, as this has deep consequences on readers' perception and the way they talk about it.
6
Sentiment, Emotion, Irony, Hate
664_2024
2,024
Daniel Scalena, Elisabetta Fersini, Malvina Nissim
A gentle push funziona benissimo: making instructed models in Italian via contrastive activation steering
ENG
3
2
0
Università di Milano Bicocca, University of Groningen
2
1
0
2
Daniel Scalena, Malvina Nissim
0
0
Italy, Netherlands
Milan, Groningen
Adapting models to a language that was only partially present in the pre-training data requires fine-tuning, which is expensive in terms of both data and computational resources. As an alternative to fine-tuning, we explore the potential of activation steering-based techniques to enhance model performance on Italian tasks. Through our experiments we show that Italian steering (i) can be successfully applied to different models, (ii) achieves performances comparable to, or even better than, fine-tuned models for Italian, and (iii) yields higher quality and consistency in Italian generations. We also discuss the utility of steering and fine-tuning in the contemporary LLM landscape, where models achieve high performance on Italian anyway, even when not explicitly trained in this language.
The strong rise in capabilities of the latest large language models (LLMs) has brought significant improvements in a wide variety of downstream tasks. These abilities mainly derive from the instruction-tuning (IT) procedure, i.e., model fine-tuning on instruction datasets, which enables the models to follow user-prompted instructions. Most LLMs, however, are mainly pre-trained and fine-tuned in English, and while other high-resource languages are included in the training data, they are not present to the extent needed to achieve out-of-the-box performances comparable to English. A strategy to address this has been, in the past few years, to fine-tune models with language-specific instructions, such as the Stanford Alpaca dataset AUTHOR, which has been automatically translated into multiple languages -- the Italian version of it has been used to train the Llama 2-based Camoscio model AUTHOR. A combination of approximately 240K training instances from three automatically translated instruction datasets was used to train the latest Llamantino AUTHOR, the most recent Llama 3-based instruction-tuned model for Italian. This approach has proven effective, but using large amounts of machine-translated texts is far from optimal: although the translation is generally good for high-resource languages, a language's unique linguistic and cultural aspects are often not represented by the training data. In addition, one must consider the substantial (computational) costs usually associated with large datasets. With recent developments in interpretability research, new approaches are arising to localize and steer different aspects of language models. These techniques mainly work with an inference-time injection, allowing for targeted interventions during the generation phase without incurring the high costs associated with any additional training. Such techniques, relying on the assumption that models are already capable of performing specific tasks, aim at enhancing some of the internal activations leading to specific solutions, thereby also increasing overall performance. They have proven successful on specific tasks, such as model detoxification, but also on more general, wide-ranging tasks AUTHOR. We explore the potential of steering for Italian-instructing a pre-trained LLM as an alternative to fine-tuning, adopting a steering technique based on contrastive examples. We observe that this approach, with much less data (fewer than 100 instances instead of 240K) and no additional training required, enables performances comparable to standard fine-tuning approaches and yields high-quality Italian generations.
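A minimal sketch of contrastive activation steering follows, assuming a decoder-only Hugging Face model; GPT-2 stands in for the Italian-capable LLMs actually used, and the layer index, contrastive pair, and scaling factor are illustrative assumptions rather than the paper's configuration.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"  # stand-in for the Italian-capable LLMs used in the paper
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    block = model.transformer.h[6]          # assumed mid-network injection point

    # One contrastive pair: desired behaviour vs. its opposite.
    pairs = [("Rispondi sempre in italiano.", "Always answer in English.")]

    def mean_hidden(text, layer=7):         # hidden_states[7] = output of block 6
        ids = tok(text, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        return out.hidden_states[layer].mean(dim=1).squeeze(0)

    with torch.no_grad():
        steer = torch.stack([mean_hidden(pos) - mean_hidden(neg)
                             for pos, neg in pairs]).mean(dim=0)

    def add_steering(module, inputs, output):
        # Shift the residual stream at every position; 4.0 is an arbitrary scale.
        return (output[0] + 4.0 * steer,) + output[1:]

    handle = block.register_forward_hook(add_steering)
    prompt = tok("La capitale d'Italia", return_tensors="pt")
    generated = model.generate(**prompt, max_new_tokens=20)
    handle.remove()
    print(tok.decode(generated[0]))

The steering vector is a mean difference of hidden activations over contrastive pairs, added back at inference time via a forward hook: no gradient updates, which is the source of the cost advantage over fine-tuning.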
To instruct a pre-trained LLM in a specific language, steering is computationally much less expensive than fine-tuning on hundreds of thousands of (automatically translated) examples. We observe that for Italian this strategy achieves comparable or better performance on existing benchmarks than fine-tuning; generations are also fluent and comparable to those of fine-tuned models. The advantage of fine-tuning is that new data, and thus new knowledge, is injected into the model via training on new examples. At the same time, this might also trigger so-called catastrophic forgetting, yielding degradation in the output. We suggest that in the context of creating a new language-specific instructed LLM, this advantage makes sense only insofar as culturally relevant and native data is used in the fine-tuning phase, so that the model can truly be enriched with language-specific knowledge, both grammatically and pragmatically. If translated data must be used, then it is far more effective to use steering, which requires much fewer examples (less than 0.5%) and a simple inference-time injection, making it an accessible method for virtually any language. Using native examples for the steering procedure, and possibly style-specific examples, might also yield interesting results.
1
Language Models
665_2024
2,024
Gabriele Sarti, Tommaso Caselli, Malvina Nissim, Arianna Bisazza
Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses
ENG
4
2
0
University of Groningen
1
1
1
4
Gabriele Sarti, Tommaso Caselli, Malvina Nissim, Arianna Bisazza
0
0
Netherlands
Groningen
Rebuses are puzzles requiring constrained multi-step reasoning to identify a hidden phrase from a set of images and letters. In this work, we introduce a large collection of verbalized rebuses for the Italian language and use it to assess the rebus-solving capabilities of state-of-the-art large language models. While general-purpose systems such as LLaMA-3 and GPT-4o perform poorly on this task, ad-hoc fine-tuning seems to improve models' performance. However, we find that performance gains from training are largely driven by memorization. Our results suggest that rebus solving remains a challenging test bed for evaluating large language models' linguistic proficiency and sequential instruction-following skills.
Complex games such as chess and Go have long been a source of inspiration for developing more flexible and robust AI systems AUTHOR. Recent developments in NLP have suggested that creative language games could be exploited as promising benchmarks for quantifying the ability of large language models (LLMs) to carry out multi-step, knowledge-intensive reasoning tasks under pre-specified constraints AUTHOR. While crossword puzzles have historically been the main focus of such efforts AUTHOR, other categories of linguistic games have received only marginal attention, especially for languages other than English. A prominent example of less-studied language games is the rebus, a visual puzzle combining images and graphic signs to encode a hidden phrase. Indeed, rebus solving is a complex, multi-step process requiring factual knowledge, contextual understanding, vocabulary usage, and reasoning within pre-defined constraints -- a set of fundamental skills for addressing a variety of real-world tasks. In this work, we conduct the first open evaluation of LLMs' rebus-solving capabilities, focusing specifically on the Italian language. We propose a novel strategy to derive text-only verbalized rebuses from transcribed intermediate rebus solutions and use it to produce a large collection with more than 80k verbalized rebuses. We then evaluate the rebus-solving skills of state-of-the-art LLMs, including open-source systems and proprietary models, via few-shot prompting. Moreover, we fine-tune a small but capable LLM on verbalized rebus solving, outperforming state-of-the-art systems by a wide margin. Finally, we conduct a fine-grained assessment of LLMs' sequential reasoning steps, explaining model performance in terms of word complexity and memorization. Beyond rebus solving, our evaluation sheds light on the limits of current LLMs in multi-step reasoning settings, highlighting challenges in their application to complex sequential instruction-following scenarios. Our materials are available on Github and Huggingface (https://huggingface.co/collections/gsarti/verbalized-rebus-clic-it-2024-66ab8f11cb04e68bdf4fb028).
This work introduced a verbalized rebus-solving task and dataset for evaluating LLMs' sequential instruction-following skills for the Italian language. We crafted a large collection of 83k verbalized rebuses by combining rebus transcriptions with crossword definitions and used it to evaluate the rebus-solving skills of state-of-the-art LLMs. Our experiments revealed the challenging nature of this task, with even the most capable prompted models achieving only 24% accuracy on solutions. While fine-tuning a smaller LLM dramatically improved performance to 51% solution accuracy, our analysis uncovered that these gains were largely driven by memorization and do not generalize to out-of-distribution examples. These results suggest important limitations in the generalization capabilities of current systems for sequential instruction-following tasks. Our manual analysis further shows that LLMs seldom account for length constraints when solving definitions, despite the fundamental role of these cues in restricting the pool of possible words. These results suggest that search-based approaches accounting for constraints more explicitly might improve puzzle structure adherence, as previously shown by AUTHOR. Other augmentation techniques employing LLM reformulation skills can also be explored to mitigate overfitting. Future work in this area should focus on expanding similar evaluations to a wider set of languages, input modalities, and puzzle categories, creating a comprehensive benchmark to test LLMs' puzzle-solving skills. Importantly, the task of solving visual rebuses (and their more convoluted variants, such as the stereorebus, which involves dynamic relations derived from multi-scene analysis) remains far beyond the current capabilities of vision-language models. Hence, solving these puzzles automatically can be considered an important milestone in developing multimodal AI systems for constrained multi-step reasoning tasks. Our results confirm that the challenging nature of rebuses, even in their verbalized form, makes this task valuable for assessing future progress in LLMs' linguistic proficiency and sequential reasoning abilities. Finally, our rebus-solving LLM can facilitate future interpretability work investigating the mechanisms behind factual recall and multi-step reasoning in transformer models AUTHOR. Limitations: Our analysis was limited to a relatively small set of models and a single prompt template obtained after minimal tuning. Further experiments are needed to verify that memorization patterns after fine-tuning remain relevant for other model sizes, prompt formats, and training regimes, particularly for full-weight training approaches. Gabriele Sarti and Arianna Bisazza acknowledge the support of the Dutch Research Council (NWO) for the project InDeep (NWA.1292.19.399). Arianna Bisazza is further supported by the NWO Talent Programme (VI.Vidi.221C.009). We are grateful to the Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini" (http://www.enignet.it/home) for making its rebus collection freely accessible on the Eureka5 platform, and to Valeriya Zelenkova for her valuable comments on the first version of this work. We also thank the CLiC-it 2024 reviewers for their valuable feedback.
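To clarify what a verbalized rebus looks like and what the length constraints amount to, here is a toy sketch of the format and of a solution-key check; the example rebus and key are invented, not drawn from the 83k collection.

    # A verbalized rebus replaces each image with a crossword-style definition;
    # solving means answering the definitions, concatenating the answers with
    # the loose letters (first pass), then re-segmenting by the solution key.
    definition = "[Definizione: il fedele amico dell'uomo]"   # answer: CANE ("dog")
    loose_letters = ["S", "T", "R", "O"]
    solution_key = [8]                                        # one 8-letter word

    first_pass = "CANE" + "".join(loose_letters)              # -> "CANESTRO"

    def respects_constraints(candidate, key):
        # A valid solution reuses exactly the first-pass letters,
        # segmented into words whose lengths match the key.
        words = candidate.split()
        return "".join(words) == first_pass and [len(w) for w in words] == key

    assert respects_constraints("CANESTRO", solution_key)     # "basket"
    assert not respects_constraints("CANE STRO", solution_key)

A check of this kind is exactly what the manual analysis found models to neglect: candidates frequently ignore the word lengths prescribed by the key.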
1
Language Models
666_2024
2,024
Giorgia Albertin, Elena Martinelli
Exploring the Use of Cohesive Devices in Dementia within an Elderly Italian Semi-spontaneous Speech Corpus
ENG
2
2
1
Università di Bologna
1
0
0
0
0
0
0
Italy
Bologna
The study of language disruption in dementia, aimed at identifying which features correlate with cognitive impairment, is a growing area in computational linguistics research. Still, it needs further development in analyzing discourse phenomena that also undergo deterioration and can help expand our understanding of dementia-related speech and refine automatic tools. This paper explores the discourse property of cohesion by investigating three types of cohesive devices: reference, lexical iteration, and connectives. Ten features related to these categories have been defined and automatically extracted from an Italian corpus of semi-spontaneous speech collected from dementia patients and healthy controls. Some of the designed features have proven significant for the binary classification of the two groups, and further quantitative analyses highlight interesting differences in the use of cohesive devices that seem to be associated with cognitive decline.
Linguistic deficits commonly characterize neurodegenerative diseases from their onset. In Dementia, or Major Neurocognitive Disorder (DSM-5 AUTHOR), a syndrome of acquired and progressive impairment in cognitive functions that interferes with independence in everyday life, language deterioration manifests itself within a broader framework of cognitive impairment, which can affect memory, visuo-spatial skills, executive functions and reasoning. Deficits in both verbal production and comprehension have been observed, despite the specificity of dementia's different etiological subtypes, among which the most common is Alzheimer's Disease (AD), characterized by a primary impairment in episodic memory. In AD, for example, well-established linguistic deficits include word-finding problems, which comprise anomia, the production of semantic paraphasias AUTHOR and the "tip-of-the-tongue" experience AUTHOR, low speech rate, poor word comprehension AUTHOR and, as the disease worsens, a generalized simplification of syntax AUTHOR. The discourse and pragmatic level is also affected by cognitive decline. Errors in referential cohesion have been registered, in particular regarding the ambiguous use of pronouns AUTHOR. Coherence is compromised, especially in spontaneous speech: discourse appears with an abundance of irrelevant details and an overt difficulty in mentioning the key concept or referring to the topic, resulting in a lack of informativeness in communication AUTHOR. In recent years, speech analysis in cognitive decline has gained increasing importance in the development of low-cost and portable tools for dementia screening, supported by the remarkable advancements in Natural Language Processing (NLP) and Machine Learning (ML) technologies AUTHOR. The refinement of classification systems goes hand in hand with the operationalization of linguistic features computed from oral productions, which need to be adapted to different languages. Regarding Italian, the OPLON (OPportunities for active and healthy LONgevity) project [2014-2016] was devoted to the automatic extraction of an extensive group of linguistic features -- from the acoustic, rhythmic, readability, lexical, morpho-syntactic and syntactic levels -- from a speech corpus of cognitively impaired patients and healthy peers AUTHOR. Analysis of the significance of the features highlighted that the acoustic ones largely correlated with the cognitive state of the subjects AUTHOR. Expanding the list of language levels covered to include discourse properties would enrich the features used for classification and, in addition, could broaden our understanding of how cognitive decline manifests itself in verbal competence. Nevertheless, defining specific features for higher-level and complex phenomena is not trivial. Drawing inspiration from works that propose a "stratified" approach to discourse analysis, which individually considers macro-phenomena that intersect with one another AUTHOR, this paper examines cohesion, the property of the superficial form of a text to reflect its internal unity AUTHOR. Cohesion assures continuity in discourse through a network of cohesive devices, mainly words or morphemes, that contribute to maintaining the semantic relations occurring in the text AUTHOR. Therefore, we propose a method to design and formalize a set of cohesion features, with the aim of observing whether they contribute to discriminating the speech of individuals with dementia from that of healthy peers.
Specifically, three types of elements, which Halliday & Hasan AUTHOR indicate among the major contributors to cohesion, were taken into consideration: reference, lexical iteration and connectives. The implementation of measures based on cohesive devices is the first step in the attempt to include discourse properties in the automatic analysis of language in cognitive decline. The study of their interaction with features from other linguistic levels is crucial to observe whether they have a positive impact on discrimination between subjects with dementia and healthy subjects. The work presented in this paper is therefore to be intended as a preliminary analysis that will serve to pursue more sophisticated ML classification in the future.
In this work, we presented a methodology for delineating linguistic features of cohesion to track and study changes in discourse properties in the speech of individuals with cognitive impairment compared to healthy peers. The research focused on three types of cohesive devices, i.e., reference, lexical iteration, and connectives, which were automatically extracted from an Italian corpus of semi-spontaneous speech from dementia subjects and controls, collected in Basilicata. Statistical significance for binary discrimination was computed by applying the Kolmogorov-Smirnov test and then adjusting the results with Bonferroni's method. The test shows that a feature based on the repetition of lemmas and one related to the set of cohesive devices jointly considered contribute to distinguishing the two groups. Moreover, the quantitative distribution of the cohesive devices reveals differences between the patient group (PG) and the control group (CG) in the use of elements within the considered categories, which seem to highlight a general deterioration in discursive competencies associated with dementia. The results obtained provide a preliminary basis for further study of discourse properties in cognitive decline, with the aim of expanding the set of automatically extractable linguistic features to other levels of language. This expansion is intended to refine digital systems that could be employed to support the early diagnosis and monitoring of neurodegenerative diseases, potentially improving timely interventions for patients and their caregivers.
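The reported significance testing can be sketched as follows: a two-sample Kolmogorov-Smirnov test per cohesion feature with a Bonferroni-adjusted threshold; the feature values below are synthetic stand-ins, since the corpus itself is not reproduced here.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    n_features = 10
    alpha = 0.05 / n_features          # Bonferroni-adjusted threshold

    for i in range(n_features):
        # Synthetic stand-ins for one cohesion feature (e.g., lemma-repetition
        # rate) measured over patient and control transcripts.
        patients = rng.normal(0.30, 0.10, size=40)
        controls = rng.normal(0.22, 0.10, size=40)
        stat, p = ks_2samp(patients, controls)
        if p < alpha:
            print(f"feature {i}: significant (KS={stat:.3f}, p={p:.2e})")

The KS test makes no normality assumption, which suits feature distributions extracted from small clinical speech samples; dividing alpha by the number of features controls the family-wise error rate across the ten tests.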
13
Multimodal
667_2024
2,024
Wolfgang S. Schmeisser-Nieto , Giacomo Ricci, Simona Frenda, Mariona Taulé, Cristina Bosco
Analysis of Implicit Stereotypes: A Corpus-Based Study for Italian
ENG
5
3
0
Universitat de Barcelona, Università di Torino, Heriot-Watt University, Aequa-tech
4
1
0
3
Wolfgang S. Schmeisser-Nieto, Simona Frenda, Mariona Taulé
1
Simona Frenda
Italy, Spain, United Kingdom
Barcelona, Turin, Edinburgh
Detecting stereotypes is a challenging task, particularly when they are not expressed explicitly. In this study, we applied an annotation schema from the literature designed to formalize implicit stereotypes. We analyzed implicit stereotypes about immigrants in two datasets created from different sources: StereoHoax-IT and SterheoSchool. StereoHoax-IT consists of reactions on Twitter to specific hoaxes aimed at discriminating against immigrants, while SterheoSchool includes comments from teenagers on fake news generated in psychological experiments. We describe the annotation process and annotator disagreements, and provide both quantitative and qualitative analyses to shed light on how implicitness characterizes stereotypes in different texts. Our findings suggest that implicit stereotypes are often conveyed through logical linguistic relations, such as entailment, and through behavioral evaluations of immigrants.
Various recent NLP studies have focused on detecting stereotypes online, often in conjunction with forms of abusive language AUTHOR. The importance of tackling this phenomenon is due to its impact on social structures and on the power of individuals. Therefore, detecting stereotypes can prevent their emergence and spread, and thereby have a positive impact on our society. In social psychology, a stereotype has been defined as a set of beliefs about others perceived as belonging to a different social group AUTHOR. It oversimplifies the features of the group and generalizes a particular feature, applying it to all its members AUTHOR. In contrast to the emotional component of prejudice and the behavioral component of discrimination, a stereotype is associated with the cognitive component of the triad AUTHOR. In language, stereotypes can be expressed explicitly or implicitly AUTHOR. Explicit stereotypes deliver a straightforward message, clearly revealing the associated traits, often using derogatory adjectives AUTHOR. In contrast, implicit stereotypes are more nuanced and indirect, requiring the reader to infer their meaning AUTHOR. These implicit stereotypes can be communicated through linguistic devices such as metaphor and irony AUTHOR, negation AUTHOR, or entailments AUTHOR. Recently, efforts have been made to formalize the strategies for expressing implicit stereotypes, with the goal of establishing standardized criteria for annotators AUTHOR. An example of an explicit stereotype is "[Gli immigrati] buttano via il cibo che gli danno per poi andare a mangiare i poveri cani, dove finiremo!" ("[Immigrants] throw away the food they are given and then go and eat poor dogs, where will we end up!"; extracted from the StereoHoax-IT corpus), in which the generalization of the target group and its association with an action is expressed in the present tense with a habitual aspect. On the other hand, in the example "Come noi rispettiamo loro e il colore della loro pelle, così loro che abitano nei nostri paesi dovrebbero portare rispetto nei nostri confronti." ("Just as we respect them and the colour of their skin, so they who live in our countries should show respect towards us."; SterheoSchool corpus), the stereotype is not overtly manifested, but must be inferred through the evaluation of the in-group and an exhortative sentence. From a computational linguistics perspective, concerns have been raised about how to detect and process stereotypes, a task often considered closely related to the detection of abusive language or hate speech AUTHOR. Alongside research on hate speech, the study of stereotype detection has increased, particularly within evaluation tasks AUTHOR. However, the detection of implicit stereotypes remains a significant challenge AUTHOR. Several works deal with stereotypes in more complex narratives, such as microportraits AUTHOR and political debates AUTHOR. The detection of implicitness has also been studied with reference to several other phenomena, in particular those characterized by subjectivity, such as irony AUTHOR. In this paper, we analyze the implicit manifestation of stereotypes targeting immigrants, using a well-defined annotation schema proposed by AUTHOR and tested on a subset of comments from Spanish newspapers (DETESTS AUTHOR). This schema comprises different criteria for determining the implicitness of stereotypes in an attempt to formalize the concept. Disentangling strategies of implicitness presents a significant challenge, often resulting in the identification of multiple categories within the same text.
Our main contributions consist of expanding the annotation with topics of stereotypes about immigrants AUTHOR and with the strategies of implicitness AUTHOR, as well as testing this schema on two existing Italian datasets. These datasets share the same domain as those used for Spanish (stereotypes about immigrants) and include data extracted from Twitter (now X) as reactions to specific hoaxes (StereoHoax-IT) and comments written by high school students about two examples of fake news artificially created within psychological experiments (SterheoSchool), as described in AUTHOR. Analyzing the annotated texts, we noted that implicit stereotypes appear to be conveyed especially through logical linguistic relations like entailment and through the behavioral evaluation of immigrants in both datasets. Moreover, in most cases the annotators needed to use contextual information to determine the presence of stereotypes. For example, in the message "Che centra lui e Italiano!, può essere massacrato!" ("What does it matter, he is Italian! He can be slaughtered!"; StereoHoax-IT), the author expresses a stereotype by complaining that foreigners enjoy better treatment than Italians, who can indeed be "macellati" (slaughtered). The rest of the paper is organized as follows: the next sections describe the datasets and the annotation applied, present quantitative and qualitative analyses of the annotated data, and finally summarize the results and provide guidance regarding future work.
In this paper, we applied an annotation scheme for analyzing the implicitness of stereotypes against immigrants along two main dimensions (i.e., topics and strategies for making the content implicit) to the Italian StereoHoax-IT and SterheoSchool corpora. Adding these two layers of annotation allowed us to observe that annotators need to use contextual information to determine the presence of stereotypes, especially when specific strategies have been used by the author of the message (irony/sarcasm, extrapolation, entailment/evaluation, and imperative/exhortative). Moreover, implicit stereotypes appear to be conveyed mainly through logical linguistic relations such as entailment and the behavioral evaluation of immigrants and, in fewer cases, via 'imperative/exhortative', 'irony/sarcasm' and 'extrapolation'. As future work, we plan to perform a comparative analysis with the datasets in Spanish, which have already been annotated with this schema, in order to understand cultural analogies and differences in portraying immigrants as threats, enemies or victims.
6
Sentiment, Emotion, Irony, Hate
668_2024
2,024
Viola Gullace, David Kletz, Thierry Poibeau, Alessandro Lenci, Pascal Amsili
The Self-Contained Italian Negation Test (SCIN)
ENG
5
1
1
CNRS, Ecole Normale Supérieure, Université Sorbonne-Nouvelle, Università di Pisa, Scuola Normale Superiore, Université Paris Cité
6
1
0
4
Viola Gullace, David Kletz, Thierry Poibeau, Pascal Amsili
0
0
Italy, France
Montrouge, Pisa, Paris
Recent research has focused extensively on state-of-the-art pretrained language models, particularly those based on Transformer architectures, and on how well they account for negation and other linguistic phenomena in various tasks. This study aims to evaluate the understanding of negation in Italian BERT- and RoBERTa-based models, in contrast to the predominantly English-focused prior research. We develop the SCIN Set, an Italian dataset designed to model the influence of polarity constraints on models in a masked-prediction task. Applying the SCIN Set reveals that these models do not adjust their behaviour based on sentence polarity, even when the resulting sentence is contradictory. We conclude that the tested models lack a clear understanding of how negation alters sentence meaning.
Compositionality is a fundamental feature of human language, based on the principle that the meaning of a complex expression derives from its parts and their respective arrangements. One notable compositional phenomenon is negation, formally defined as a semantic operator (or function) that reverses the truth-value of a sentence AUTHOR. Given its importance, understanding how well pretrained language models (PLMs) grasp and apply this principle is crucial. These models achieve impressive performance across a wide array of language modeling tasks. Nonetheless, they often turn out to rely on shallow heuristics or to exhibit other issues in handling specific aspects of language. A prominent bias in the body of research is that the vast majority of work on language models has concentrated on English. This focus raises concerns about the generalizability of findings to other languages, which may be structurally different from English. Conducting similar experiments in other languages could provide valuable context and material for comparison, potentially highlighting language-specific effects or revealing new generalizations. Therefore, we decided to undertake a new experiment focusing on Italian negation. Thus, in this article, we aim to explore whether the behavior of PLMs accurately models the polarity of sentences. We investigate how the addition of negation to a sentence can alter its overall meaning (demonstrating the models' capability to handle shifts in meaning due to structural changes). Given the limitations explained above, we have deliberately chosen to concentrate on Italian. This choice not only addresses the need to explore how these models perform in languages other than English but also serves as a critical test for PLMs dedicated to Italian. We suspect that these models may not be as advanced or effective as their English counterparts, highlighting the need for further developments beyond English. We adapt the test set developed for English by AUTHOR to Italian, creating the Self-Contained Italian Negation Set (SCIN Set). Using the dataset to evaluate BERT- and RoBERTa-based models for Italian, we find that these models are unable to adjust their predictions in response to the constraints posed by negation, often generating contradictory text.
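The probing setup can be illustrated with a minimal masked-prediction sketch on a positive/negative minimal pair; the model checkpoint and sentences are illustrative assumptions, not items from the SCIN Set.

    from transformers import pipeline

    # Assumed Italian BERT checkpoint; the minimal pair is invented.
    fill = pipeline("fill-mask", model="dbmdz/bert-base-italian-cased")

    positive = "Un gatto è un [MASK]."      # "A cat is a [MASK]."
    negative = "Un gatto non è un [MASK]."  # "A cat is not a [MASK]."

    for sentence in (positive, negative):
        top = fill(sentence)[0]
        print(sentence, "->", top["token_str"], round(top["score"], 3))
    # A negation-blind model returns the same filler (e.g., "animale") for
    # both polarities, yielding a contradiction in the negated sentence.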
In this paper, we investigated the ability of several Italian PLMs to take negation into account in their predictions. To do this, we adapted to Italian the test proposed by AUTHOR, which is based on minimal pairs of aligned sentences. Applying this test to six models enabled us to show that negation modifies their predictions, but that this does not happen consistently or in a way that is always coherent with the semantic effect we expect negation to have on sentences. These results suggest a strong need to adapt these models to make them more sensitive to negation and its semantic consequences. Nevertheless, we also noted a fairly marked difference in performance from one model to another, correlated with the different corpora used to train them. We thus suggest that a lexical and statistical study of these corpora could shed further light on the behavior of the models. Lastly, it would be interesting to compare these results with the performance of generative models, in order to study the relative importance of the number of model parameters in relation to their architecture.
1
Language Models
669_2024
2,024
Dennis Fucci, Beatrice Savoldi, Marco Gaido, Matteo Negri, Mauro Cettolo, Luisa Bentivogli
Explainability for Speech Models: On the Challenges of Acoustic Feature Selection
ENG
6
2
0
Università di Trento, Fondazione Bruno Kessler
2
0
0
0
0
0
0
Italy
Trento
Spurred by the demand for transparency and interpretability in Artificial Intelligence (AI), the field of eXplainable AI (XAI) has experienced significant growth, marked by both theoretical reflections and technical advancements. While various XAI techniques, especially feature attribution methods, have been extensively explored across diverse tasks, their adaptation to the speech modality is comparatively lagging behind. We argue that a key challenge in feature attribution for speech processing lies in identifying informative acoustic features. In this paper, we discuss the key challenges in selecting the features for speech explanations. In light of existing research, we also highlight current gaps and propose future avenues to enhance the depth and informativeness of explanations for speech.
Spoken language, as perhaps our most natural form of interaction, is the foundational element of many technologies we interact with in our daily lives, from virtual assistants to voice dictation. More recently, the emergence of highly capable speech foundation models has facilitated and expanded the adoption of speech technologies on an unprecedented multilingual scale. In light of this proliferation, a need arises to prioritize transparency and interpretability, qualities already demanded in the growing landscape of Machine Learning (ML). As a response, the field of eXplainable AI (XAI) has risen prominently, with the aim of facilitating understanding of the rationale behind model decisions and fostering users' trust. XAI is also reinforced by the establishment of norms and legal frameworks, as seen in the European Union's General Data Protection Regulation, which enshrines the "right to explanation," and the AI Act, which emphasizes transparency as a pivotal component of ML applications. XAI encompasses various tasks and methods, such as identifying relevant model components for specific predictions, understanding the information processed by these components, and determining which input elements guide the model's predictions. The latter task is the focus of feature attribution methods, which provide intuitive explanations by visualizing which input elements (e.g., pixels in an image or words in a sentence) have influenced the model’s predictions. These methods assign a score to each input feature, quantifying its importance or contribution to the output: higher scores indicate greater importance of the corresponding input features for generating the output. They can help identify potential causes for errors and unexpected behaviors, as well as analyze the model’s response to specific input properties. Overall, these explainability methods serve to present the reason why models make specific predictions by establishing a connection between input and output as a form of intuitive explanation for humans, thereby enhancing interpretability. Despite numerous efforts to differentiate the closely related concepts of explainability and interpretability, no consensus exists in the literature on their definitions. In this paper, we adopt a perspective where explainability refers to the process of extracting insights from a model's workings through specific techniques, while interpretability refers to the understanding process of those insights, crucial to make them actionable. Over time, ongoing efforts have aimed to refine feature attribution techniques and provide more effective explanations. However, it is essential to recognize that the effectiveness of feature attribution explanations relies not only on the techniques themselves but also on the informativeness of the input features used as explanatory variables. If an explanation highlights unintelligible or poorly informative features, it does little to enhance the understanding of the model's behavior. This can undermine key principles in XAI, such as accuracy—the property of correctly reflecting the factors that led the model to a specific decision, including all relevant information—and meaningfulness—the property of offering explanations that are comprehensible to the user. In fields involving images or texts, feature representations are typically constrained to pixels and words, respectively. However, for speech, multiple input representations can be adopted, each emphasizing different acoustic aspects. 
Indeed, a sequence of speech elements not only conveys the meaning of what is said (like words in a text) but also bears a wealth of additional information useful for both human understanding and automatic processing (e.g., intonation, loudness, speaking rate). Consequently, when employing feature attribution methods, the resulting explanations can vary significantly in shape and focus on more or less informative characteristics depending on the type of speech representation used. To date, research on feature attribution for speech is notably limited to a few applications, including classification and generative tasks, which offer a somewhat fragmented picture in the choice of speech representations, thus providing limited insights into the relation between the features considered and the explanations based upon them. In light of the above, this paper reflects on the impact of the chosen acoustic features in explaining the rationale behind speech models, aiming to gain a deeper understanding of the trade-offs associated with acoustic features. By first offering a gentle introduction to the rich and multidimensional nature of speech and its digital representation, we identify current gaps and potential avenues for effectively incorporating this multidimensionality into XAI for speech models. Our discussion focuses on two critical factors: i) the amount of information these features provide about the model's behavior, which influences the richness of the explanations, and ii) the level of detail of such information, which determines the granularity of the explanations. We also explore how these aspects impact both the accuracy and meaningfulness of the explanations, ultimately shaping their overall interpretability.
This paper has examined the role of acoustic features and their selection for explaining speech models. More specifically, we considered a specific subfield of XAI, namely feature attribution, which connects input features to outputs as a form of explanation. Previous research has not explicitly addressed how to incorporate features into the explanation process within the speech domain, where input is encoded in more varied ways compared to other fields, such as text. This has led to diverse approaches, each with different implications for what can and cannot be explained about model behavior, and with the risk of not fully or accurately representing the model's functioning. By discussing the key characteristics of speech and the properties of the most adopted acoustic features, we argue that explanations should ideally encompass all available dimensions, particularly time and frequency, as both are essential for a comprehensive understanding of the models' rationale. We have also discussed challenges associated with aligning explanations at high granularity with human understanding, emphasizing solutions that provide flexibility in the analysis, allowing for adjustments between more or less detail as needed. Building on these insights, our ongoing research focuses on developing feature attribution techniques that operate on spectrograms at the finest possible unit level, integrating both time and frequency dimensions. Our aim is to generate explanations that are accurate and meaningful for experts, as well as adaptable for non-expert users. More broadly, we hope that our reflections will be beneficial and thought-provoking for researchers currently working in, or entering, the field of XAI for speech models, thereby contributing to a deeper understanding of the rationale behind these models.
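As a pointer to what spectrogram-level attribution involves, here is a minimal gradient-x-input sketch over a (mel, frame) input; the classifier is a placeholder, and gradient-x-input merely stands in for the more elaborate attribution techniques discussed above.

    import torch
    import torch.nn as nn

    class SpeechClassifier(nn.Module):
        # Placeholder model mapping a (batch, mel, frame) spectrogram to logits.
        def __init__(self, n_mels=80, n_frames=100, n_classes=10):
            super().__init__()
            self.net = nn.Sequential(nn.Flatten(),
                                     nn.Linear(n_mels * n_frames, n_classes))
        def forward(self, x):
            return self.net(x)

    model = SpeechClassifier()
    spec = torch.rand(1, 80, 100, requires_grad=True)  # synthetic spectrogram

    logits = model(spec)
    logits[0, logits.argmax()].backward()
    attribution = (spec.grad * spec)[0]  # one score per time-frequency cell
    print(attribution.shape)             # torch.Size([80, 100])

Because the attribution map preserves both axes, each score can be read as the contribution of a specific frequency band at a specific moment, which is the kind of explanation the paper advocates.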
13
Multimodal
670_2024
2,024
Simone Manai, Laura Gemme, Roberto Zanoli, Alberto Lavelli
IDRE: AI Generated Dataset for Enhancing Empathetic Chatbot Interactions in Italian language.
ENG
4
1
0
Università di Trento, Lutech-Softjam, Fondazione Bruno Kessler
3
0
0
0
0
2
Simone Manai, Laura Gemme
Italy
Trento, Genova
This paper introduces IDRE (Italian Dataset for Rephrasing with Empathy), a novel automatically generated Italian linguistic dataset. IDRE comprises typical chatbot user utterances in the healthcare domain, corresponding chatbot responses, and empathetically enhanced chatbot responses. The dataset was generated using the Llama2 language model and evaluated by human raters based on predefined metrics. The IDRE dataset offers a comprehensive and realistic collection of Italian chatbot-user interactions suitable for training and refining chatbot models in the healthcare domain. This facilitates the development of chatbots capable of natural and productive conversations with healthcare users. Notably, the dataset incorporates empathetically enhanced chatbot responses, enabling researchers to investigate the effects of empathetic language on fostering more positive and engaging human-machine interactions within healthcare settings. The methodology employed for the construction of the IDRE dataset can be extended to generate sentences in additional languages and domains, thereby expanding its applicability and utility. The IDRE dataset is publicly available for research purposes.
Emotional intelligence has been widely recognized as a crucial factor influencing human communication, impacting aspects such as behavioral choices and the interpretation of information AUTHOR. Consequently, there has been growing interest in developing chatbots capable of exhibiting empathetic responses AUTHOR. While significant strides have been made in this direction, the integration of empathy into commercial chatbots remains challenging due to the rigid constraints imposed by business rules, such as the requirement that the response must not lose the original meaning and that the dialogue must maintain its structure. To address this limitation, one possible approach is to build a layer that rephrases the bot's response, increasing empathy without altering the structure or meaning of the underlying dialogue. This strategy offers the potential to enhance user experience and creates a foundation for more sophisticated empathetic dialogue systems. To facilitate the development of such systems, a robust dataset containing empathetic responses is essential. Despite the increasing body of research on emotion recognition and generation in human-computer interaction, there is a notable absence of publicly available datasets specifically focused on empathy in chatbot interactions. This paper introduces the IDRE dataset, a new Italian language resource comprising human-bot interactions within the healthcare domain. The dataset is publicly available; the address is provided in the Online Resource section. The dataset includes user questions, original bot responses and corresponding empathetic reformulations, for a total of 480 sentences, providing a valuable foundation for research and development in empathetic chatbot technology (see Table 1 for an example). The paper also elaborates on the methodology employed for dataset generation, highlighting its applicability to diverse domains and languages.
In this work, we have presented the creation of a dataset of sentences representing typical interactions with a healthcare chatbot. The dataset includes both user input sentences and empathetic responses generated by the chatbot. Human validation has confirmed the quality and usefulness of the dataset for developing and evaluating empathetic chatbots in the healthcare domain. This work presents a two-pronged contribution to the field of empathetic chatbots, specifically focusing on the Italian language. Firstly, it addresses the critical issue of data scarcity by providing a high-quality, annotated dataset for training and evaluating empathetic chatbots within a healthcare context. This dataset can be employed to fine-tune large language models (LLMs) such as Llama2, enabling them to generate responses with demonstrably enhanced empathetic qualities. The limitations of non-fine-tuned models are exemplified by the observation that they can produce factually incorrect or unempathetic sentences (e.g., "Il tuo corpo è vulnerabile al rischio del tumore al seno a causa della tua età avanzata, nonostante la tua vitalità e forza interiori. La storia familiare di tumori al seno nella tua famiglia e la tua condizione di obesità possono aumentare il rischio, come pure l'abuso di tabacco e alcool. Inoltre, la tua scelta di non avere figli o di averli dopo l'età di 35 anni può aggiungere ulteriore rischio al tuo corpo." -- "Your body is vulnerable to the risk of breast cancer because of your advanced age, despite your vitality and inner strength. The family history of breast cancer in your family and your obesity can increase the risk, as can tobacco and alcohol abuse. Moreover, your choice not to have children, or to have them after the age of 35, can add further risk to your body."). By leveraging the proposed dataset and selecting sentences with demonstrably high empathy scores, a targeted training set can be constructed specifically for this purpose. This, in turn, allows for fine-tuning of the LLM, significantly improving its ability to generate empathetic responses in a healthcare setting. Secondly, the work contributes a rigorous human validation methodology for evaluating the effectiveness of empathy expression in chatbots. This methodology provides a valuable tool for researchers and developers working in this domain.
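The generation recipe can be sketched with a single empathy-rephrasing prompt to an instruction-tuned Llama-family model; the prompt wording, example sentences, and model checkpoint are assumptions, not the authors' exact pipeline.

    from transformers import pipeline

    # Assumed instruction-tuned checkpoint (gated on the Hub); the paper used
    # Llama2, but the exact variant and prompt are not reproduced here.
    gen = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

    user = "Ho paura dei risultati delle mie analisi."       # invented user turn
    bot = "I risultati saranno disponibili tra tre giorni."  # invented bot turn

    prompt = ("Riformula la risposta del chatbot in modo più empatico, "
              "senza alterarne il significato o la struttura.\n"
              f"Utente: {user}\nChatbot: {bot}\nRisposta empatica:")

    out = gen(prompt, max_new_tokens=80, do_sample=False)
    print(out[0]["generated_text"])

Constraining the instruction to preserve meaning and structure mirrors the business rules discussed above, and the human rating step then filters the generations by empathy quality.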
3
Chatbots and Dialogue Systems
671_2024
2,024
Hamit Kavas, Marc Serra-Vidal, Leo Wanner
Enhancing Job Posting Classification with Multilingual Embeddings and Large Language Models
ENG
3
0
0
Pompeu Fabra University, Adevinta Spain, Catalan Institute for Research and Advanced Studies (ICREA)
3
1
1
3
Hamit Kavas, Marc Serra-Vidal, Leo Wanner
2
Hamit Kavas, Marc Serra-Vidal
Spain
Barcelona
In the modern labour market, taxonomies such as the European Skills, Competences, Qualifications and Occupations (ESCO) classification are used as an interlingua to match job postings with job seeker profiles. Both are classified with respect to ESCO occupations, and they match if they align with the same occupation and the same skills assigned to that occupation. However, matching models usually struggle with this classification because of overlapping skills and similar definitions of occupations in the ESCO taxonomy, which often leads to imprecise classification outcomes. In this paper, we focus on the challenge of classifying job postings written in Italian or Spanish against ESCO occupations written in English. We experiment with multilingual embeddings, zero-shot classification, and the use of a large language model (LLM), and show that the use of an LLM leads to the best results. Furthermore, we explore an alternative automatic labeling method by prompting three top-performing LLMs to annotate the test dataset. This approach serves both as an experiment on the usability of automatic labeling and as an evaluation, involving human annotators, of the reliability of the automatically assigned labels.
The modern labour market is becoming increasingly diverse. High-tech jobs demand novel skills and competences, which in turn keep undergoing adaptation and modification. Under these circumstances, accurately classifying job postings and CVs of job seekers (henceforth candidate experiences) that contain detailed technological specifications with remarkably similar yet distinct skills and experiences has become a complex challenge. The overwhelming majority of job portals and employment agencies use either the European Skills, Competences, Qualifications and Occupations (ESCO) taxonomy or its US equivalent, the O*Net taxonomy, to classify job postings and candidate experiences in terms of job-title-labeled ESCO/O*Net occupations. Most proposals for the automatic alignment of job postings with candidate experiences (or vice versa) also use ESCO or O*Net AUTHOR. Typically, they also leverage some of the additional information provided in the taxonomy, such as skills, competences, and qualifications, although, e.g., AUTHOR focus solely on job (i.e., occupation) titles and then incorporate job transition trajectories and historical experiences, including start and end dates, from the job application. However, despite their wide use, both the ESCO and O*Net taxonomies exhibit limitations in principle for the automatic classification of job postings and candidate experiences: due to their tree structure, they often fail to adequately distinguish between occupations that exhibit substantial skill overlaps. For instance, two job postings labeled as 'data analyst' may appear similar but require different skills if one focuses on market research while the other concentrates on healthcare trend analysis. This issue is particularly pronounced when classification relies on a single label, such as the job title of an ESCO occupation, where skill overlaps undermine precise classification. Hence, employing multiple job titles and framing the problem as a multi-label classification task is imperative. This paper addresses the challenge of multilingual multi-label classification using Large Language Models (LLMs) for the alignment of Italian and Spanish job postings with English job titles from the ESCO taxonomy. Multilingual class embeddings are explored to improve classification accuracy, aiming to provide the necessary contextual awareness and to address the core limitations of taxonomies such as ESCO. Furthermore, we explore an alternative automatic labeling method by prompting three top-performing LLMs to annotate the test dataset. This approach serves both as an experiment on the usability of automatic labeling and as an evaluation, involving human annotators, of the reliability of the automatically assigned labels.
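As a rough illustration of the multilingual-embedding route discussed above, one could embed a posting and the English ESCO titles in a shared space and rank titles by cosine similarity; the model name and the tiny title list below are assumptions, not the paper's setup.

```python
# Sketch of a multilingual-embedding baseline: embed an Italian/Spanish
# posting and English ESCO titles in one space, rank by cosine similarity,
# and keep the top-k titles (multi-label). Model and titles are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
esco_titles = ["data analyst", "data scientist", "market research analyst"]
title_emb = model.encode(esco_titles, convert_to_tensor=True)

def classify(posting: str, k: int = 2):
    post_emb = model.encode(posting, convert_to_tensor=True)
    scores = util.cos_sim(post_emb, title_emb)[0]
    top = scores.argsort(descending=True)[:k]
    return [(esco_titles[int(i)], float(scores[i])) for i in top]

print(classify("Cercasi analista dati per ricerche di mercato"))
```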
To provide LLMs with domain-specific information and to mitigate hallucinations during the classification of job postings, we employ Retrieval Augmented Generation (RAG) AUTHOR, which combines information retrieval with a generative model. RAG serves two critical functions in our methodology. Firstly, it provides detailed definitions, including essential skills and synonyms for each ESCO occupation, selected through vector similarity as outlined in AUTHOR. Secondly, it ensures that the assigned job titles are restricted to titles within our predefined label space, i.e., the standardized job titles defined in the ESCO taxonomy. The contributions of our work are: (i) we explore the impact of using multilingual class embeddings derived from the ESCO taxonomy for the task of job posting classification; (ii) we integrate RAG to provide LLMs with domain-specific information and eliminate the dependency on fine-tuning; (iii) we show how the LLM response can be restricted to standardized job titles, and thus how LLMs can be used for high-quality job title classification that outperforms state-of-the-art proposals for this task. The remainder of the paper is structured as follows: we first present a concise overview of related work, then outline the model on which our work is based, describe the experiments we carried out, the results we obtained, and their discussion, and finally draw conclusions and outline directions for future research. The appendices present an ablation study assessing our model's comprehension of English ESCO job titles and their Spanish equivalents, illustrative examples of Italian job postings with predicted ESCO job titles, and the signature used to prompt the LLMs for pre-processing.
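The two RAG functions described here could be sketched as follows; the `esco_index` layout and the `llm` callable are placeholders rather than the authors' implementation.

```python
# Placeholder sketch of the two RAG functions: (i) retrieve the most similar
# ESCO entries for a posting, (ii) prompt an LLM while restricting the
# answer to the retrieved titles. Index layout and `llm` are assumptions.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def retrieve(posting_emb, esco_index, k=5):
    # esco_index: list of (title, definition, skills, embedding) tuples
    ranked = sorted(esco_index, key=lambda e: -cosine(posting_emb, e[3]))
    return ranked[:k]

def classify_with_rag(posting, candidates, llm):
    context = "\n".join(
        f"- {t}: {d} (skills: {', '.join(s)})" for t, d, s, _ in candidates)
    allowed = ", ".join(t for t, _, _, _ in candidates)
    prompt = (f"Job posting:\n{posting}\n\nESCO occupations:\n{context}\n\n"
              f"Answer with exactly one title from: {allowed}.")
    return llm(prompt)
```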
In this paper, we argued that the use of multilingual embeddings in combination with LLMs significantly enhances our ability to distinguish between very similar (or even identical) job titles that imply different skills and competencies. Our experiments have shown that this is indeed the case, demonstrating that the combination of multilingual text-embedding similarity with Llama-3 markedly exceeds the performance of other leading approaches in the field. In the future, we plan to apply the same approach to the analysis and classification of job candidate experiences. Once it is ensured that both job postings and candidate experiences can be accurately modeled using the embedded representation of the ESCO taxonomy, we plan to set the stage for a more direct and efficient alignment process between job postings and the experiences of job seekers. Another interesting direction for future research is to analyze the lexical overlap between English domain-specific terms that appear in Italian and Spanish job postings and the English occupation descriptions in the ESCO taxonomy. Such an analysis would reveal whether job types with higher lexical overlap affect model accuracy, providing deeper insight into the multilingual nature of the task.
21
Gender and Inclusive Language Studies
672_2024
2,024
Eleonora Cappuccio, Benedetta Muscato, Laura Pollacci, Marta Marchiori Manerba, Clara Punzi, Chandana Sree Mala, Margherita Lalli, Gizem Gezici, Michela Natilli, Fosca Giannotti
Beyond Headlines: A Corpus of Femicides News Coverage in Italian Newspapers
ENG
10
10
1
Università di Pisa, CNR-ISTI, Università di Bari Aldo Moro, Scuola Normale Superiore
4
0
0
0
0
0
0
Italy
Pisa, Bari
How newspapers cover news significantly impacts how facts are understood, perceived, and processed by the public. This is especially crucial when serious crimes are reported, e.g., in the case of femicides, where the description of the perpetrator and the victim builds a strong, often polarized opinion of this severe societal issue. This paper presents FMNews, a new dataset of articles reporting femicides extracted from Italian newspapers. Our core contribution aims to promote a deeper framing and awareness of the phenomenon through an original resource that is openly accessible to the research community, facilitating further analyses on the topic. The paper also provides a preliminary study of the resulting collection through several example use cases and scenarios.
How newspapers and journalists present news plays a crucial role in shaping public understanding and perception of information. This is especially important when reporting serious crimes, such as femicides, where descriptions of the perpetrator and victim can create polarized opinions that influence readers' perceptions and interpretations of the event. According to Bouzerdan (2018), news media often report incidents of women's homicides in a sensationalised manner, treating these crimes as isolated events rather than situating them within the broader framework of violence against women. This narrative runs counter to the global demands of human rights organisations to acknowledge and address the phenomenon in a way that reflects its intricate dynamics. Numerous countries have followed such recommendations only partially, through the formal adoption of specific terminology such as femicide and feminicide in legal frameworks and public discourse. The two terms have related but distinct nuances of meaning. Femicide, a criminological concept initially coined in English by the feminist criminologist Diana H. Russell (Radford and Russell, 1992), denotes the murder of women by males because of their gender. Subsequently, the term, translated into Castilian as femicidio or feminicidio by the anthropologist Marcela Lagarde to attract political attention to the dire situation faced by women in Mexico (Lagarde, 2006), gained global traction with varying interpretations. Yet it consistently denotes a patriarchal impetus behind homicides and other forms of male violence against women, primarily emphasizing the sociological dimensions of abuse and the socio-political ramifications of the phenomenon. In the Italian language, the term femminicidio has been almost exclusively adopted, as evidenced by a Google Trends analysis comparing the search terms "femicidio" and "femminicidio" with queries regarding "femicide". An analysis of the phenomenon of femicide in the Italian context, and in particular a linguistic investigation of it, is therefore particularly relevant. Feminicide, a term used by the feminist movement in Italy since 2005, gained prominence in the media in 2011, especially thanks to the works of Barbara Spinelli (Spinelli, 2008). The CEDAW Committee, based on data from the Shadow Report on the Implementation of CEDAW in Italy, addressed recommendations on feminicide to the Italian government in its Concluding Observations. This was the first time the Committee addressed a European state on feminicide, a category previously reserved for warnings to Central American countries. The challenges in accurately contextualising feminicide in Italy also stem from a prolonged absence of official data, resulting in sensationalism and the perception of a dramatic rise in the crime. This may induce an emergency narrative that obscures the inherent structural dimensions of the phenomenon, thereby undermining the very essence of the term (Spinelli, 2012). Media interpretations are essential for shaping a shared understanding across a vast audience, such as a whole country; hence, the examination of media discourse emerges as a significant analytical instrument, alongside the statistical evaluation of femicide data, for understanding the achievements and directions of state intervention towards the substantial granting of women's right to life (Abis, 2016).
In this regard, Aldrete (2023) showed that there is a large body of empirical studies on femicide discourse across different socio-cultural contexts, a discourse that often justifies the perpetrator's actions. Given the complexity of the phenomenon, a comprehensive investigation can be achieved by integrating media analysis with external data, such as demographics and current events, bringing together researchers from fields such as computer science, the social sciences, and complex systems science. The lack of accessible, relevant data specific to socio-cultural contexts where femicide is notably prevalent, such as Italy, makes the task particularly challenging (Forciniti, 2023). This paper presents FMNews, a new dataset of articles reporting femicides extracted from Italian newspapers. We conduct a preliminary analysis of the resulting collection through several example use cases and scenarios. The primary contribution is to deepen understanding and awareness of femicide from a socio-technical perspective. We examine how prominent Italian news sources report on the issue in connection with the shaping of public perception, while also offering an innovative and accessible resource to facilitate future investigation within the research community. Furthermore, this study was designed to enable a multifaceted investigation covering three dimensions. Geographical: exploring potential variations in framing between local and national media outlets; since previous research has shown that Italian local daily newspapers often suppress the agency of the perpetrator, portraying the events as mere occurrences, we selected newspapers reporting news at both the national level (e.g., Corriere della Sera, La Repubblica, La Stampa, Il Fatto Quotidiano, Il Giornale, and Il Post) and the local level, with local editions spanning the whole Italian territory. Political: ensured by choosing national newspapers with varying political leanings. Temporal: the time frame of the national newspapers extends from November 2009 to February 2024, while that of the local ones ranges from November 2010 to February 2024.
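One plausible shape for the keyword-and-period retrieval underlying such a corpus is sketched below; the article fields and the term pattern are illustrative assumptions, not the authors' actual query.

```python
# Illustrative retrieval step for building such a corpus: keep articles in
# the covered period whose text matches femicide-related terms. Article
# fields and the term pattern are assumptions, not the authors' query.
import re
from datetime import date

TERMS = re.compile(r"\bfemminicidi\w*|\bfemicidi\w*", re.IGNORECASE)

def select_articles(articles, start=date(2009, 11, 1), end=date(2024, 2, 29)):
    """articles: iterable of dicts with (at least) 'date' and 'text' keys."""
    return [a for a in articles
            if start <= a["date"] <= end and TERMS.search(a["text"])]
```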
In this contribution, we provided a novel dataset concerning the critical issue of femicide in Italy. Given the absence of resources for conducting in-depth analyses on the subject, our intent was to bridge this gap and provide an original perspective for understanding and raising awareness about this severe phenomenon. As suggested by AUTHOR, proposing a contribution within the Machine Learning domain responsibly and consciously means, foremost, acknowledging our own biases. In particular, we refer to both the newspaper selection and the choice of the terms used to extract the data, which certainly shaped the results (all design choices are justified in detail in the dataset section). A future outlook concerns the investigation of how both victims and perpetrators are framed from a linguistic perspective. Further analyses could identify temporal and geographical patterns arising from the media attention manifested through the coverage of femicides, and compare the framing of these events with the political leaning of the respective newspapers.
7
Lexical and Semantic Resources and Analysis
673_2024
2,024
Marco Cuccarini, Lia Draetta, Chiara Ferrando, Liam James, Viviana Patti
ReCLAIM Project: Exploring Italian Slurs Reappropriation with Large Language Models
ENG
5
3
0
Università di Napoli Federico II, Università di Perugia, Università di Torino, Università di Bologna
4
0
0
0
0
0
0
Italy
Naples, Perugia, Turin, Bologna
Recently, social networks have become the primary means of communication for many people, leading computational linguistics researchers to focus on the language used on these platforms. As online interactions grow, recognizing and preventing offensive messages targeting various groups has become urgent. However, finding a balance between detecting hate speech and preserving free expression while promoting inclusive language is challenging. Previous studies have highlighted the risks of automated analysis misinterpreting context, which can lead to the censorship of marginalized groups. Our study is the first to explore the reappropriative use of slurs in Italian by leveraging Large Language Models (LLMs) with a zero-shot approach. We revised the annotations of an existing Italian homotransphobic dataset, developed new guidelines, and designed various prompts to address the task with LLMs. Our findings illustrate the difficulty of this challenge and provide preliminary results on using LLMs for such a language-specific task. Warning: this paper contains examples of explicitly offensive content. Our positionality: this paper is situated in Italy in 2024 and is authored by researchers specializing in Natural Language Processing (NLP). Beyond our academic work, we are sensitive to anti-hate speech issues. Our backgrounds are in theoretical linguistics, computer science, and NLP.
In recent years, social networks have become the primary means of communication for most people. With the daily growth of online interactions, it has become urgent to recognize and prevent the spread of offensive messages against different target groups based on gender, sex, sexual orientation, race, religion, language, or political orientation. Moreover, categorizing hate speech with clear-cut boundaries is overly simplistic, as it includes various forms of abusive language that imply disrespect and hostility. A recent challenge is finding a balance between detecting hate speech and preserving the free circulation of ideas and opinions on the web, while promoting inclusive and fair language. Thiago et al. (2021) AUTHOR highlighted how automated analysis can misinterpret context, risking the censorship of marginalized groups' languages, such as those of the LGBT+ community. Another study by Pamungkas and colleagues (2020) AUTHOR emphasized the importance of considering context in Natural Language Processing (NLP) tasks to avoid misinterpretations of word meanings, noting that the same swear word can be used both abusively and non-abusively. An example of this phenomenon is semantic reappropriation, a practice in which terms historically used as slurs against a specific target group lose their offensive intent in certain contexts, expressing instead a sense of belonging and solidarity among group members AUTHOR. Although community visibility and the use of specific slang have been studied for years, to our knowledge only a few hate speech studies specifically address slurs, and fewer still focus on their semantic reappropriation AUTHOR. Recognizing this kind of semantic shift with NLP tools is crucial to avoid removing non-abusive speech from online content, which could paradoxically harm marginalized users AUTHOR. Our study is the first to investigate the reappropriative use of slurs in Italian, highlighting the need to move beyond existing abusive language detection models. Given the capability of LLMs in classification tasks, we leveraged an LLM with a zero-shot approach to recognize the presence of reappropriative uses in our dataset. This study makes the following contributions: (i) we partially revised the original annotation previously conducted on the HODI dataset (Homotransphobic Dataset in Italian) AUTHOR, developing new annotation guidelines; (ii) we used an LLM specifically fine-tuned on the Italian language, leveraging prompt engineering; (iii) from a linguistic point of view, we showed why certain features of the Italian language make this task particularly challenging. The paper is structured as follows: we first review the most significant related work on hate speech detection and zero-shot approaches leveraging LLMs; we then describe our methodology for the dataset creation and the implementation of the zero-shot tasks; next, we report the results, their analysis, and the main limitations of this work; finally, we draw conclusions from the current research.
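A minimal sketch of the zero-shot setup follows, with a generic generation callable standing in for the Italian-tuned Qwen model; the prompt wording and output labels are illustrative, not the paper's four prompts.

```python
# Sketch of a zero-shot classification step; `generate` stands in for any
# text-generation callable (e.g., a pipeline around an Italian-tuned Qwen).
# Prompt wording and labels are illustrative, not the paper's prompts.
def classify_zero_shot(tweet: str, generate) -> str:
    prompt = (
        "The following Italian tweet contains a homotransphobic slur. "
        "Answer REAPPROPRIATED if the slur is used non-abusively, as an "
        "in-group marker of belonging and solidarity; otherwise answer "
        f"ABUSIVE.\nTweet: {tweet}\nAnswer:"
    )
    return generate(prompt).strip().split()[0]
```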
This paper presents the first attempt to specifically address the detection of slur reappropriation in the Italian language. One of the reasons that motivated us to undertake this task is the need to ensure a safe linguistic environment on social networks without risking the censorship of individual freedom of expression. Since there was no existing dataset for exploring homophobic slurs in the Italian language, we filtered a pre-existing homotransphobic dataset to build a subset containing only tweets with slur occurrences, used both abusively and non-abusively. We then designed precise new guidelines and annotated the filtered subset, focusing on the presence of slur semantic reappropriation. With the newly annotated dataset, we approached a classification task using LLMs with zero-shot techniques. Leveraging the Qwen model AUTHOR, we proposed four different prompts. As suggested by previous literature, more specific prompts, and those better suited to the dataset, yielded better performance. In this work, we addressed an important and under-explored task through a two-fold contribution. On the one hand, we highlighted the lack of Italian-language data dealing with this phenomenon and the necessity of building an up-to-date corpus that comprehensively includes multiple sources and semantic contexts. On the other hand, we demonstrated a possible approach leveraging new state-of-the-art LLMs. Finally, it is important to bear in mind that, compared to English, Italian has a different history and cultural background, resulting in a much slower linguistic evolution. This makes establishing precise characteristics of this topic challenging, due to the lack of solid foundational knowledge. In conclusion, we believe that bringing attention to the issue will foster anti-discrimination activities, the creation of safer spaces in online communication, and the inclusion and acceptance of LGBT+ communities.
6
Sentiment, Emotion, Irony, Hate
674_2024
2,024
Alessandro Lento, Andrea Nadalini, Nadia Khlif, Vito Pirrelli, Claudia Marzi, Marcello Ferro
Comparative Evaluation of Computational Models Predicting Eye Fixation Patterns During Reading: Insights from Transformers and Simpler Architectures
ENG
6
2
0
CNR-ILC, Università Campus Bio-Medico, University Mohammed First
3
1
0
1
Nadia Khlif
0
0
Italy, Morocco
Pisa, Rome, Oujda
Eye tracking data during reading provides significant insights into the cognitive processes underlying language comprehension. It allows for the estimation of lexical, contextual, and higher-level structural effects on word identification through metrics such as fixation duration. Despite advancements in psycholinguistic experiments that have elucidated these effects, the extent to which computational models can predict gaze patterns remains unclear. Recent developments in computational modeling, particularly the use of pre-trained transformer language models, have shown promising results in mirroring human reading behaviors. However, previous studies have not adequately compared these models to alternative architectures or considered various input features comprehensively. This paper addresses these gaps by replicating prior findings on English data, critically evaluating performance metrics, and proposing a stricter accuracy measurement method. Furthermore, it compares different computational models, demonstrating that simpler architectures can achieve results comparable to or better than transformers. The study also emphasizes the significance of individual differences in reading behavior, presenting challenges for simulating natural reading tasks.
Eye-tracking data recorded while reading connected text offers valuable insights into the cognitive processes involved in language comprehension. For example, by looking at fixation duration it is possible to estimate the effects that lexical properties (e.g., length, frequency, orthographic similarity AUTHOR AUTHOR), contextual constraints (e.g., predictability AUTHOR), and higher-level structures (e.g., syntactic structure or prosodic contour AUTHOR) can exert on word identification. While psycholinguistic experiments have reliably assessed how such effects modulate reading times, either linearly or non-linearly, it is not clear to what extent computational models can directly predict behavioral metrics such as gaze patterns. Over the past 30 years, research in this field has advanced significantly, leading to the development of sophisticated architectures that can account for many aspects of eye movement behavior during reading (e.g., E-Z Reader AUTHOR, SWIFT AUTHOR). A great boost came from the collection and release of eye-tracking corpora (e.g., GECO AUTHOR, ZuCo AUTHOR, MECO AUTHOR), which allowed deep learning techniques to be applied to the prediction of eye-tracking metrics. Recent work AUTHOR used fine-tuned pre-trained transformer language models, achieving remarkably accurate predictions of a wide range of eye-tracking features related to both early and late stages of lexical access. According to such results, these models would accurately mirror average human reading behavior, suggesting that transformers inherently encode the relative importance of language elements similarly to human cognitive processing mechanisms. However, the authors did not make a direct comparison with alternative computational architectures (e.g., recurrent neural networks) and alternative input features (e.g., word length, frequency, predictability, and part of speech, among others), but rather referred to previous literature to support their claim. In the current paper, we directly address this issue by replicating the results from AUTHOR on the English data. We critically evaluate the metrics used to assess model performance and provide an alternative, stricter way to measure accuracy. Next, we test a comprehensive set of different models on the same task, showing how simpler architectures can achieve comparable or even better results than transformers. Finally, we discuss the overall results in terms of group-level vs. individual-level behavior, highlighting the importance of individual differences and the challenge they pose when simulating natural reading tasks.
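For concreteness, a feature-based baseline of the kind compared against transformers here might look as follows, on synthetic data; real pipelines would use features extracted from an eye-tracking corpus, and everything below is illustrative.

```python
# Illustrative feature-based baseline: predict first-pass duration from a
# few lexical predictors. The synthetic arrays stand in for corpus features
# (word length, log frequency, predictability); nothing here is the
# authors' pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))          # length, log frequency, predictability
y = 200 + 15 * X[:, 0] - 20 * X[:, 1] - 10 * X[:, 2] + rng.normal(0, 30, 1000)

model = Ridge().fit(X[:800], y[:800])
print("test MAE (ms):", mean_absolute_error(y[800:], model.predict(X[800:])))
```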
Transformer-based neural networks appear to reasonably predict the fixation probability and first-pass duration of words in human reading of English connected texts. Our present investigation broadly supports this conclusion, while providing new evidence on two related questions: how accurate are transformer-based predictions compared with the best predictions of other neural network classifiers trained on the same task, and how cognitively plausible are the mechanisms underpinning this performance? We addressed both questions by testing various models on the task of predicting human reading measurements from the GECO corpus, using different evaluation metrics and regressing network predictions on a few linguistic factors that are known to account for human reading behaviour. Our first observation is that assessing a network's performance by looking at its MAE loss function provides a rather gross evaluation of the effective power of a neural network simulating human reading behaviour. A baseline model assigning each token a constant gaze duration equal to the average of all FPD values attested in GECO achieves a 95.7% loss-based accuracy on both test and training data. That a transformer-based classifier scores 97.2% on the same metric and the same test data cannot be held, as such, as a sign of outstanding performance. In fact, it turns out that the MAE loss function is blind both to the magnitude of a network's errors and to possible biases in the prediction of very low/high target values. Thus, it provides an inflated estimate of a model's accuracy. We suggest that binary evaluation metrics based on a fixed threshold partially overcome these limitations. Yet, as single-word fixation times typically range from tens to hundreds of milliseconds, applying a fixed threshold affects tokens with different fixation times differently. We conclude that a relative threshold based on each word's fixation time is a fairer way to measure prediction accuracy. Clearly, this comes at a cost: when assessed with a relative threshold, the accuracy of a transformer-based architecture on test data drops from 70% down to 57.8%. All the other network models tested for the present purposes showed accuracy levels comparable to that of the transformer-based architecture. Since the former are trained on a more restricted set of lexical and contextual input features than the latter, this suggests that word embeddings are of limited use in the task at hand. Although fine-tuned word embeddings appear to score much higher on training data (even using accT and accS), we observe that this is due to overfitting, as clearly shown by the considerably poorer performance of the fine-tuned model on test data. An analysis of the psychometric plausibility of the gaze patterns simulated with our neural models reveals that a relatively small set of linguistic factors known to account for a sizeable amount of variance in human fixation times can also account for the bulk of the variance in the models' behaviour. This is relatively unsurprising, as most of these models were trained on input features that encode at least some of these factors. Nonetheless, we believe the result is interesting for at least two reasons.
First, it shows a promising convergence between computational metrics of model accuracy and quantitative models of psychometric assessment. Secondly, it suggests that one can gain nontrivial insights into a model's behaviour by analysing to what extent that behaviour is sensitive to the same linguistic factors human readers are known to be sensitive to. On the one hand, this is a step towards understanding what information a neural model is actually learning and putting to use for the task. On the other hand, it is instrumental in developing better models, as it shows what type of input information is most needed to successfully carry out a task, at least if one is trying to simulate the way the same task is carried out by speakers. In the end, it may well be the case that a 70% fixed-threshold accuracy in simulating average gaze patterns in human reading is not as disappointing as it might seem. Given the wide variability in human reading behaviour (and even within a single reader confronted with different texts), a considerable amount of the variance in our data may simply be accounted for by by-subject (or by-token) random effects.
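A small sketch of the evaluation point made above, contrasting a constant-prediction baseline with a relative (per-word) threshold accuracy; the tolerance and the simulated first-pass durations are illustrative.

```python
# Sketch: a constant-prediction baseline can look strong under loss-style
# metrics, while a relative per-word threshold is stricter. Tolerance and
# simulated durations are illustrative.
import numpy as np

def relative_accuracy(pred, target, tol=0.2):
    """Share of tokens predicted within tol * target (per-word threshold)."""
    return float(np.mean(np.abs(pred - target) <= tol * target))

target = np.random.default_rng(1).gamma(4.0, 50.0, size=5000)  # fake FPDs (ms)
const = np.full_like(target, target.mean())                    # baseline
print("relative-threshold accuracy of constant baseline:",
      relative_accuracy(const, target))
```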
1
Language Models
675_2024
2,024
Veronica Mangiaterra, Chiara Barattieri di San Pietro, Valentina Bambini
Temporal word embeddings in the study of metaphor change over time and across genres: a proof-of-concept study on English
ENG
3
3
1
IUSS
1
0
0
0
0
0
0
Italy
Pavia
Temporal word embeddings have been successfully employed in semantic change research to identify and trace shifts in the meaning of words. In previous work, we developed an approach to study the diachrony of complex expressions, namely literary metaphors. Capitalizing on the evidence that measures of semantic similarity between the two terms of a metaphor approximate human judgments of the difficulty of the expression, we used time-locked measures of similarity to reconstruct the evolution of the processing costs of literary metaphors over the past two centuries. In this work, we extend this approach, previously applied to Italian literary metaphors, and present a proof-of-concept study testing its crosslinguistic applicability on a set of 19th-century English literary metaphors. Our results show that the processing costs of metaphors changed as a function of textual genre but not of epoch: the cosine similarity between the two terms of literary metaphors is higher in literary than in nonliterary texts, and this difference is stable across epochs. Furthermore, we show that, depending on the metaphor's structure, the difference between genres is affected by word-level variables, such as the frequency of the metaphor's vehicle and the stability of the meaning of both topic and vehicle. From a broader perspective, general considerations can be drawn about the history of literary and nonliterary English and the semantic change of words.
Does the metaphor "The wind is a wrestler" convey the same feeling today as it did in 1888, when Gerard Manley Hopkins used it in the poem "That Nature is a Heraclitean Fire and of the comfort of the Resurrection" AUTHOR? The answer to this question is not trivial: human languages evolve constantly, alongside the societies in which they are used, so much so that the concepts associated with each word, as well as their semantic associations with other words, have changed to different degrees AUTHOR. Studies on lexical semantic change have a long tradition AUTHOR but, with the increasing availability of historical language data and the development of new digital tools, they have radically opened up to new approaches coming from computational linguistics and distributional semantics AUTHOR. In its diachronic declination, the Distributional Hypothesis AUTHOR holds that changes in the contexts in which a word occurs over time may reveal a change in meaning AUTHOR. Operatively, this means that by training vector space models on historical text corpora from different epochs, it is possible to create time-locked representations of words: if the meaning of a word changed over time, its vector representation at $t_1$ will differ from its vector representation at $t_2$; conversely, if the two vectors of the same word at $t_1$ and $t_2$ are in close proximity, the meaning of the word has remained stable. Comparing word vectors diachronically, however, is not effortless and requires the temporal vector space models to be aligned. Alignment is a crucial step in diachronic distributional semantics and has been tackled with different approaches AUTHOR. Previous studies employing temporal embeddings have found that more frequent words change more slowly than less frequent words, that polysemous words change faster than monosemous words AUTHOR, and that synonyms tend to change meaning comparably AUTHOR. However, temporal word embeddings have mostly been applied to the study of the semantic change of single words and only marginally to complex linguistic expressions, leaving the field with a knowledge gap on the evolution of meaning of a widespread linguistic and textual phenomenon such as metaphor. Within the theoretical framework of Relevance Theory AUTHOR, metaphors are non-literal uses of language involving a conceptual adjustment described as a context-driven broadening of the lexically denoted meaning of words. In terms of linguistic structure, metaphors normally involve two terms, the topic and the vehicle: for example, in the metaphor 'Sally is a chameleon', the topic Sally is described by the broadened vehicle chameleon, to indicate a person who changes attitude or behavior to fit their surroundings. While metaphors are broadly used in everyday communication, they are certainly a distinctive feature of literary texts, as has long been evidenced in stylistics AUTHOR. Past studies on literary metaphors, however, report mixed results. The rating study by Katz et al. AUTHOR found no difference between literary and everyday metaphors, while other studies showed that the former type is less familiar and more open-ended than the latter AUTHOR, although literary metaphors are rated as less difficult and more familiar when presented together with their original context AUTHOR. Moreover, the processing of literary metaphors seems to be particularly effortful, given the multitude of possible meanings they evoke AUTHOR.
Therefore, open questions remain regarding how literary metaphors are processed. It must also be underlined that the literary metaphors used in previous studies were written tens or hundreds of years ago. Yet the effect of this diachronic dimension on their processing costs, as well as its interplay with the textual genre in which metaphors are embedded, remains an open question. In addition to its diachronic application, the use of vector space models can help characterize metaphors thanks to the ability of these models to approximate human performance in psycholinguistic tasks. Measures derived from vector space models have been shown to approximate how humans process word meaning AUTHOR and, more specifically, to correlate with how humans perceive metaphorical expressions in terms of metaphoricity, difficulty, and other psycholinguistic dimensions AUTHOR. In particular, semantic similarity, operationalized in vector space models as the cosine similarity (CS) between topic and vehicle, has long been considered relevant for metaphor studies AUTHOR and, more recently, for automatic metaphor identification AUTHOR. In a previous study on Italian AUTHOR, we developed a novel method, employing the Temporal Word Embeddings with a Compass (TWEC) model AUTHOR as the training procedure, to capture the temporal dynamics of literary metaphors. This method combines computational models' ability to approximate human judgments with their diachronic applications, allowing us to track the diachronic evolution of how literary metaphors are perceived by readers over the course of 200 years. In the present proof-of-concept study, we apply this approach to English, to test its crosslinguistic applicability and whether it can provide language-specific insights into the evolution of metaphors. We take the similarity between the topic and vehicle of a metaphor as a proxy for its difficulty and analyze how it varies across time and textual genres. We also consider the role of word frequency (WF) and vector coherence (VC), two widely used measures in the study of semantic change AUTHOR, as well as semantic neighborhood density (SND), in shaping the difficulty of the expression. WF and VC were considered to assess the effect of the semantic change of single words on the evolution of whole-metaphor understanding, while SND was considered to analyze the diachronic unfolding of a measure known to synchronically impact metaphor understanding AUTHOR.
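Assuming compass-aligned per-epoch embeddings are available as gensim KeyedVectors (file names hypothetical), the time-locked topic-vehicle similarity could be computed as follows.

```python
# Assuming compass-aligned per-epoch embeddings saved as gensim
# KeyedVectors (file names hypothetical), track the topic-vehicle cosine
# similarity across epochs.
from gensim.models import KeyedVectors

epochs = ["1820", "1860", "1900", "1940", "1980", "2020"]
models = {e: KeyedVectors.load(f"twec_{e}.kv") for e in epochs}

def cs_over_time(topic: str, vehicle: str):
    return {e: float(kv.similarity(topic, vehicle))
            for e, kv in models.items()
            if topic in kv and vehicle in kv}   # skip epochs with OOV terms

print(cs_over_time("wind", "wrestler"))
```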
In this proof-of-concept study, we characterized the temporal dynamics of a set of English literary metaphors to understand whether their processing costs changed over time. We also explored whether this change was affected by the genre of the texts, as well as by the semantic properties of the constituting elements of the metaphors (topic and vehicle). By leveraging the diachronic applications of distributional semantics and extending a method already applied to the study of Italian literary metaphors AUTHOR, we created a series of time-locked semantic representations of 139 English metaphors, from which we derived a measure of the cosine similarity between their terms (CS), taken as a proxy of their difficulty, together with semantic neighborhood density (SND), stability over time (VC), and, from four diachronic corpora, the frequency (WF) of their topics and vehicles. Results showed no effect of epoch for either 'A is B' or 'A of B' literary metaphors. Thus, no noticeable change in CS over time was revealed, suggesting that these metaphors come with similar processing costs for contemporary readers and for readers of the epoch in which the metaphors were created. The absence of an effect of epoch can be better understood by considering the historical evolution of the English language, and specifically its early standardization. As stated by Wyld AUTHOR, literary writing as early as the 18th century was considered 'English of our own age in all its essentials'. In line with this consideration, our results point to the stability of the main stylistic features of the English language over the last two centuries, including those related to metaphors. While literary metaphors are not processed differently based on epoch, the influence of textual genre is noticeable. This factor emerged both as a main effect and in different interaction patterns with single-word variables, varying according to the type of metaphor. For 'A of B' metaphors, results revealed that difficulty changed as a function of genre. In particular, these metaphors are perceived as less difficult when found in literary contexts than when encountered in nonliterary texts. Hence, their difficulty is sensitive to the style of the text in which they are found: when read in a text with a literary style and aesthetic intent, a metaphor is less striking than the same metaphor in a nonliterary text. Moreover, we found a strong effect of the stability of the meaning of the vehicle in interaction with epoch and genre. This suggests that 'A of B' metaphors with more unstable vehicles are perceived as less difficult than 'A of B' metaphors with vehicles whose meanings remained stable over time. We interpreted this result in light of Traugott's AUTHOR theory of metaphorization, according to which the metaphorical use of a word can become one of its stable meanings. In the context of the present study, the words that changed the most could have done so by incorporating meanings derived from their metaphorical uses. As a result, when these unstable, broadened vehicles are used, the metaphors appear less difficult: the reader does not need to broaden the concept expressed by the vehicle to interpret the metaphor, because the metaphorical nuances have entered the standard meaning of the word.
From a qualitative observation of the data, we can notice, for instance, that a metaphor such as "Wave of horror", where the vehicle wave has incorporated the meaning of 'sudden increase in a particular phenomenon', is perceived as less metaphorical than "Clouds of doubt", whose vehicle clouds has maintained its original meaning. For 'A is B' metaphors, the statistical model highlighted an effect of the frequency of the vehicle in interaction with epoch and genre. In nonliterary texts, the perceived difficulty of 'A is B' metaphors varied as a function of the word frequency (WF) of their vehicle, resulting in opposite patterns in the past and present. In the 19th century, metaphors with less frequent vehicles were perceived as more metaphorical. Conversely, in the 21st century, the less frequent the vehicle, the less metaphorical the metaphor. The 19th-century pattern aligns with prior studies AUTHOR showing that metaphors with less frequent vehicles communicate novel information about the topic, thereby enhancing their perceived metaphorical nature. For instance, Hopkins' metaphor "The wind is a wrestler" featured the vehicle wrestler, a low-frequency word in the 19th century, which conveyed novelty about the topic wind, rendering the metaphor more metaphorical and conceptually challenging. However, this same metaphor is perceived differently today, as wrestler has become more frequent, reducing its novelty and metaphoricity for 21st-century readers. Overall, our results suggest that in English, metaphor processing costs are not influenced by the temporal distance between their creation (in the 19th century) and their processing by contemporary readers. Instead, the primary modulator of metaphor processing costs appears to be the textual genre in which they appear. This modulation varies depending on the syntactic structure of metaphors and interacts with single-word measures. Distinct patterns emerged for 'A of B' and 'A is B' structures in determining metaphor difficulty. For 'A of B' metaphors, a main effect of genre and an effect of vector coherence interacting with epoch and genre were observed. For 'A is B' metaphors, diachronic variations in processing costs were linked to the interaction of word frequency with epoch and genre. While these differences might reflect genuine syntactic effects on metaphorical predication AUTHOR, the imbalance in the size of the two metaphor sets might obscure some effects in the less represented 'A is B' structure. Further studies are necessary to explore diachronic changes in processing across structural differences more comprehensively. In conclusion, this proof-of-concept study adapted a method using temporal word embeddings from Italian to English to investigate metaphor evolution. This approach demonstrated that the processing costs of English literary metaphors remain stable over time (unlike Italian) but are dynamically modulated by stylistic features and single-word measures. The method's sensitivity to language-specific characteristics underscores its crosslinguistic applicability.
7
Lexical and Semantic Resources and Analysis
676_2024
2,024
Ilaria Chizzoni, Alessandro Vietti
Towards an ASR system for documenting endangered languages: a preliminary study on Sardinian
ENG
2
1
1
Libera Università di Bolzano
1
0
0
0
0
0
0
Italy
Bolzano
Speech recognition systems are still highly dependent on textual orthographic resources, posing a challenge for low-resource languages. Recent research leverages self-supervised learning on unlabeled data or employs multilingual models pre-trained on high-resource languages for fine-tuning on the target low-resource language. These approaches are effective when the target language has a shared writing tradition; but when we are confronted with primarily spoken languages, be they endangered minority languages, dialects, or regional varieties, we lack not only labeled data but also a shared metric to assess speech recognition performance. We first provide the research background on ASR for low-resource languages and describe the specific linguistic situation of Campidanese Sardinian; we then evaluate five multilingual ASR models using traditional evaluation metrics and an exploratory linguistic analysis. The paper addresses key challenges in developing a tool for researchers to document and analyze the phonetics and phonology of spoken (endangered) languages.
The growing interest in understudied languages has led to categorizing them on the basis of resource availability, defining them as high-, low-, or zero-resource languages. In the narrowest sense, zero- and low-resource languages are those lacking sufficient data to train statistical and machine learning models AUTHOR AUTHOR AUTHOR. However, such a technical definition is not adequate to account for the different linguistic scenarios of the world's languages. As a matter of fact, in the literature the terms low- and zero-resource language are still used inconsistently. Sometimes they describe standard, widely spoken languages with a shared orthography that cannot rely on many hours of transcribed or annotated speech, as for Afrikaans, Icelandic, and Swahili in AUTHOR. Sometimes they are used for non-standard, widely spoken languages lacking a shared orthography (no orthography, or multiple proposed orthographies), as for Swiss German dialects AUTHOR or Nasal and Besemah AUTHOR. And sometimes they refer to non-standard, endangered languages lacking a shared orthography, like Bribri, Mi'kmaq, and Veps AUTHOR. These scenarios are mainly being addressed with two approaches. The first leverages self-supervised learning and uses unlabeled data from the target language to learn linguistic structures AUTHOR. Self-supervised learning is an attractive choice in low-resource settings because it only requires gathering more audio data; however, it can be costly and prone to catastrophic forgetting AUTHOR AUTHOR. The second approach involves training a single multilingual model on labeled data from highly resourced languages and then applying the trained model to transcribe unseen target languages. This retains the benefits of a supervised learning setting and has proved effective AUTHOR. Pre-trained multilingual models can then be fine-tuned on a smaller dataset of labeled data in the target language. Since fine-tuning is a straightforward, efficient approach, it is the preferred one for addressing the problem of low-resource languages AUTHOR. However, the success of this approach still depends on the amount of available labeled data in the target language, or on whether it is possible to generate more, e.g., via data augmentation. Several data augmentation approaches for low-resource languages are currently being explored, including self-training AUTHOR, text-to-speech (TTS) AUTHOR, and optimized dataset creation AUTHOR. Bartelds and colleagues AUTHOR propose data augmentation techniques to develop ASR for minority languages, regional languages, and dialects. They employ a self-training method on Besemah and Nasal, two Austronesian languages spoken in Indonesia. In self-training, a teacher XLS-R model is fine-tuned on manually transcribed data; the teacher model is then used to transcribe unlabeled speech, and a student model is fine-tuned on the combined set of manually and automatically transcribed data. Since the collected 4 hours of manually transcribed speech for Besemah and Nasal followed different orthography conventions, the transcriptions were first normalized to working orthographies and then used for fine-tuning. In the same framework, the authors leveraged a pre-existing TTS system available for Gronings, a Low Saxon language variant spoken in the province of Groningen in the Netherlands, to generate more synthetic training data from textual sources, achieving strong results AUTHOR.
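The self-training recipe summarized above can be reduced to a few lines of pseudocode-style Python; `finetune` and `transcribe` are placeholders for an XLS-R fine-tuning pipeline, not an actual API.

```python
# Pseudocode-style reduction of the self-training recipe: fine-tune a
# teacher on the small manually transcribed set, pseudo-label untranscribed
# audio, then fine-tune a student on the union. `finetune` and `transcribe`
# are placeholders, not a real API.
def self_train(labeled, unlabeled, finetune, transcribe):
    teacher = finetune(base="xls-r", data=labeled)
    pseudo = [(audio, transcribe(teacher, audio)) for audio in unlabeled]
    return finetune(base="xls-r", data=labeled + pseudo)
```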
While fine-tuning paired with data augmentation techniques works for low-resource, widely spoken languages, developing a speech recognition system for endangered spoken languages also involves ethical considerations towards the local community. More participatory research is required to understand native speakers' relationship with the written form of their language, as well as with language technologies. In their position paper AUTHOR, Liu and colleagues emphasize the importance of creating language technologies in consultation with speakers, activists, and community language workers. They present a case study on Cayuga, an endangered indigenous language of Canada with approximately 50 native elder speakers and an increasing number of young L2 speakers. After gaining insights from the community, they began collaborating on a morphological parser. This tool aids teachers and young L2 students in language learning while gradually providing morphological annotations and segmentations useful for developing ASR systems for researchers. Blaschke and colleagues AUTHOR surveyed 327 native speakers of German dialects and regional varieties, finding that respondents prefer tools that process speech over text and favor language technology that handles dialect speech as input rather than producing it as output. Understanding the needs of the speech community, and differentiating them from those of linguistic researchers, can guide research more effectively. This paper outlines the first steps towards a speech recognition system for researchers, to aid the systematic analysis of the phonetics and phonology of Campidanese, an endangered language spoken in southern Sardinia. To achieve this goal, we first describe the situation of the speech community of the target language; we then select five multilingual, inference-ready speech recognition models and evaluate them on Campidanese Sardinian. When multilingual models were not available for the speech recognition task, we chose multilingual models fine-tuned on Italian, which we assume to be a relatively close language both genealogically and structurally. We assess the quality of the models' transcriptions, first by computing the traditional evaluation metrics, i.e., average Word Error Rate (WER) and Character Error Rate (CER), and then by carrying out a qualitative linguistic analysis to gain better insight into which model best meets the needs of language documentation and research. This work is part of "New Perspectives on Diphthong Dynamics (DID)", a joint project between the University of Bozen and the Ludwig-Maximilians-Universität München focusing on the study of diphthong dynamics in two understudied languages, Campidanese Sardinian and Tyrolean, and it aims to build a corpus for the linguistic documentation of these two languages.
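A minimal sketch of the metric computation used in such an evaluation, via the jiwer library; the reference/hypothesis pair is invented, and text normalization (casing, punctuation) is omitted for brevity.

```python
# Minimal WER/CER computation via the jiwer library; the reference and
# hypothesis strings are invented examples, not corpus data.
from jiwer import cer, wer

refs = ["custa est una domo"]     # illustrative Campidanese reference
hyps = ["custa este una domo"]    # illustrative model transcription
print("WER:", wer(refs, hyps), "CER:", cer(refs, hyps))
```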
The preliminary analysis carried out in this paper provided insight into how various speech recognition models transcribe data in a Romance language not encountered in model training. All evaluated models improve their performance as the audio length increases. The best CER values are achieved on read-speech audio longer than 20 seconds. However, short audio clips of spontaneous speech, with an average length of 5.3 seconds, achieved a remarkably low CER, i.e., better precision compared to the similarly short (3.5 seconds) read-speech chunks. These results suggest that speech style might also play a role. To investigate whether the models are sensitive to speech style, other linguistic, speaker-specific, or technical variables, such as the topic, the age and gender of the speaker, or the acoustic quality of the audio data, should be taken into account. For example, both datasets of spontaneous speech are produced by males over 45, and models might be biased toward an adult male speaker profile. For the time being, we attribute this to the limited representativeness of the dataset and will investigate it in future work. A controlled yet diverse dataset facilitated a qualitative linguistic analysis of the predictions. Interestingly, some models seem to follow the phonotactic constraints of the languages they have been trained on, but at the same time they generalize well to unfamiliar languages, providing fairly accurate, phonetically oriented orthographic transcriptions of Campidanese Sardinian. These initial considerations should be validated with tests on a larger corpus, to eliminate data bias, and with a more systematic linguistic analysis, to avoid cherry-picking. We also plan to look in detail at the architectures of the speech recognition models in order to make an informed choice at the fine-tuning phase. In conclusion, it seems that state-of-the-art transcription models, especially multilingual ones, produce phonetically accurate orthographic transcriptions of Campidanese Sardinian and thus provide a promising basis for fine-tuning. Specifically, Wav2Vec2 large XLSR-53 and STT Multilingual FastConformer Hybrid proved to be the best models according to the evaluation metrics and the preliminary linguistic analysis. STT Multilingual FastConformer Hybrid was the best and most efficient in terms of computational resources, which makes it our first choice for further testing and fine-tuning. However, it is worth noting that speech recognition systems with orthographic output can be costly in terms of human and computational resources, poorly informative for speech researchers, and uninteresting to native speakers, whereas recent work on multilingual automatic phonemic recognition seems a viable alternative worth exploring for documenting endangered spoken languages. Work funded by the New Perspectives on Diphthong Dynamics (DID) project #I83C22000390005. We would like to extend our gratitude to Daniela Mereu for providing the essential data for this research and for her invaluable perspective. We also thank Loredana Schettino and Aleese Block for their support and helpful insights.
13
Multimodal
677_2024
2,024
Emanuele Brugnoli, Donald Ruggiero Lo Sardo
Community-based Stance Detection
ENG
2
2
0
Sony Computer Science Laboratories Rome, Centro Studi e Ricerche Enrico Fermi (CREF), Sapienza Università di Roma
3
0
0
0
0
2
Emanuele Brugnoli, Donald Ruggiero Lo Sardo
Italy
Rome
Stance detection is a critical task in understanding the alignment or opposition of statements within social discourse. In this study, we present a novel stance detection model that labels claim-perspective pairs as either aligned or opposed. The primary innovation of our work lies in our training technique, which leverages social network data from X (formerly Twitter). Our dataset comprises tweets from opinion leaders, political entities and news outlets, along with their followers' interactions through retweets and quotes. By reconstructing politically aligned communities based on retweet interactions, treated as endorsements, we check these communities against common knowledge representations of the political landscape. Our training dataset consists of tweet/quote pairs where the tweet comes from a political entity and the quote either originates from a follower who exclusively retweets that political entity (treated as aligned) or from a user who exclusively retweets a political entity from an opposing ideological community (treated as opposed). This curated subset is used to train an Italian language model based on the RoBERTa architecture, achieving an accuracy of approximately 85%. We then apply our model to label all tweet/quote pairs in the dataset, analyzing its out-of-sample predictions. This work not only demonstrates the efficacy of our stance detection model but also highlights the utility of social network structures in training robust NLP models. Our approach offers a scalable and accurate method for understanding political discourse and the alignment of social media statements.
Stance detection is a critical task within the domain of natural language processing (NLP). It involves identifying the position or attitude expressed in a piece of text towards a specific topic, claim, or entity AUTHOR. Traditionally, stances are classified into three primary categories: favor, against, and neutral. This classification enables a detailed description of textual data, facilitating deeper insight into public opinion and discourse dynamics. In recent years, the proliferation of digital communication platforms such as social media, forums, and online news outlets has resulted in an unprecedented volume of user-generated content. This surge underscores the necessity for automated systems capable of efficiently analyzing and interpreting these vast text corpora. Stance detection addresses this need by providing tools that can systematically assess opinions and reactions embedded within texts, thus offering valuable applications across various fields including social media analysis AUTHOR, search engines AUTHOR, and linguistics AUTHOR. According to the latest report of the World Economic Forum AUTHOR, the increase in societal polarization features among the top three risks for democratic societies. While a macroscopic increase in polarization has been observed, an understanding of the microscopic pathways through which it develops is still an open field of research. Through stance detection it would be possible to reconstruct these pathways down to individual text-comment pairs. Stance detection has been explored across various fields with differing definitions and applications. Du Bois introduces the concept of the stance triangle, where stance-taking involves evaluating objects, positioning subjects, and aligning with others in dialogic interactions, emphasizing the sociocognitive aspects and intersubjectivity in discourse AUTHOR. Sayah and Hashemi focus on academic writing, analyzing stance and engagement features like hedges, self-mention, and appeals to shared knowledge to understand communicative styles and interpersonal strategies AUTHOR. Küçük and Can define stance detection as the classification of an author's position towards a target (favor, against, or neutral), highlighting its importance in sentiment analysis, misinformation detection, and argument mining AUTHOR. These diverse approaches underscore the multifaceted nature of stance detection and its applications in enhancing the understanding of social discourse, academic rhetoric, and online content analysis. For a review of recent developments in the field, we refer to Alturayeif et al. AUTHOR and AlDayel et al. AUTHOR. In this work, we propose a novel approach to training stance detection models by leveraging the interactions within highly polarized communities. Our method utilizes tweet/quote pairs from the Italian political debate to construct a robust training set. We operate under the assumption that users who predominantly retweet a particular political profile are likely in agreement with the statements made by that profile. We restricted our analysis to retweets, since this form of communication primarily aligns with the endorsement hypothesis AUTHOR. Namely, being a simple re-posting of a tweet, retweeting is commonly thought to express agreement with the claim of the tweet AUTHOR.
Further, though retweets might be used for other purposes, such as those described by Marsili AUTHOR, the repeated nature of the interactions we observe in our networks reduces the probability that the activity falls outside of endorsement behavior. Conversely, while quoting a tweet works similarly to retweeting, the function allows users to add their own comment above the tweet. This makes this form of communication ambiguous with respect to the endorsement hypothesis, as agreement or disagreement with the tweet depends on the stance of the added comment. On the other hand, the information social media users see, consume, and share through their news feed heavily depends on the political leaning of their early connections AUTHOR. In other words, while algorithms are highly influential in determining what people see and shaping their on-platform experiences AUTHOR, there is significant ideological segregation in political news exposure AUTHOR. It is therefore reasonable to expect that users who almost exclusively retweet a political entity (party, leader, or both) use quote tweets to express agreement with statements posted by that entity and disagreement with statements posted by political entities ideologically distant from their preferred one. Additionally, the quote interaction perfectly encapsulates the stance triangle described by Du Bois AUTHOR. In order to correctly assess political opposition, we construct a retweet network and use the Louvain community detection algorithm AUTHOR to characterize leaders and, through label propagation, the followers that align with their views. Through these community labels we construct a dataset of claim-perspective pairs, annotating tweet-quote pairs from profiles that clearly express political alignment as favor, and tweet-quote pairs in which the profiles come from different communities as against. Finally, we use a pretrained BERT model for the Italian language and fine-tune it for the classification task. This methodology aims to enhance the accuracy of stance detection models by incorporating real-world patterns of agreement and disagreement observed in polarized online environments. Further, it enables an unsupervised training paradigm that can be scaled to very large datasets. In the following sections, we will outline the data-gathering approach used for the dataset. Subsequently, we will describe the community detection methods employed to identify leaders and users within the Italian political discourse. We will then discuss the model architecture and its training process. In the results section, we will evaluate the model's performance and present our findings. Finally, the conclusion will address potential future developments, the implications of our work, and its limitations.
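As a rough sketch of the community detection step (not the authors' exact pipeline), the following Python snippet builds a weighted retweet graph and partitions it with the Louvain implementation available in networkx (version 2.8 or later); the edge list is a toy example.

import networkx as nx

# Toy retweet edge list: (retweeter, retweeted_account, number_of_retweets)
edges = [
    ("user_a", "party_1", 42), ("user_b", "party_1", 17),
    ("user_c", "party_2", 35), ("user_d", "party_2", 8),
    ("user_a", "party_2", 1),
]

# Retweets treated as endorsements: weighted, undirected graph
G = nx.Graph()
for src, dst, w in edges:
    G.add_edge(src, dst, weight=w)

# Louvain community detection; each community groups a political
# entity with the followers who predominantly retweet it
communities = nx.community.louvain_communities(G, weight="weight", seed=0)
membership = {node: i for i, comm in enumerate(communities) for node in comm}
print(membership)

From such a membership map, tweet/quote pairs can be labeled as aligned (same community) or opposed (different communities), following the annotation scheme described above.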
This study introduces a novel stance detection model that significantly advances the understanding of alignment and opposition in social discourse. By leveraging social network data from X (formerly Twitter), we developed a robust training technique that utilizes interactions within politically aligned communities. Our approach involved curating a dataset of tweet/quote pairs, where the quotes are derived from users' interactions with leaders and politicians. This dataset facilitated the training of a BERT model, which achieved a state-of-the-art accuracy of approximately 85%. Our findings underscore the efficacy of using social network structures to train NLP models, demonstrating that retweet interactions can serve as reliable indicators of political alignment. This methodology not only enhances the scalability of stance detection but also offers a nuanced understanding of political discourse on social media platforms. By reconstructing politically aligned communities and validating them against expert knowledge, our model provides a robust framework for analyzing the alignment of social media statements. The implications of this work extend beyond stance detection, offering potential applications in monitoring political sentiment, identifying misinformation, and understanding public opinion dynamics. Future research could explore the integration of additional social network features, the capacity of the model to generalize to other domains and interaction types, and how stance propagates within networks. Additionally, investigating the role of specific linguistic markers, like adverbs, across different languages and cultures can reveal universal and language-specific determinants of stance. While our model shows promising results, it relies heavily on the assumptions that retweets are mainly a form of endorsement, that quotes within one's own political community always express agreement, and that quotes across communities always express disagreement. While the high level of polarization observed in these networks supports the validity of these assumptions, it also restricts the applicability of the model to domains where polarization is evident and these assumptions hold. We extend our deepest gratitude to Vittorio Loreto, the director of the Sony Computer Science Laboratories (CSL) and Professor at La Sapienza University of Rome, for his invaluable support and sponsorship of this research. His guidance was pivotal for the successful completion of our study. We also thank the anonymous reviewers for their insightful suggestions, which have greatly contributed to enhancing the quality of this work.
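For readers who want to reproduce the general recipe, here is a minimal, hedged sketch of fine-tuning a sequence-pair classifier on claim/perspective pairs with Hugging Face transformers; the checkpoint name, example texts, and hyperparameters are placeholders, not the study's actual configuration.

from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical Italian RoBERTa-style checkpoint; substitute a real one
model_name = "some-org/italian-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labelled data: 0 = aligned, 1 = opposed
pairs = Dataset.from_dict({
    "claim": ["testo del tweet politico", "altro tweet politico"],
    "perspective": ["commento del quote", "altro commento"],
    "label": [0, 1],
})

def tokenize(batch):
    # Claim and perspective are encoded together as a single sequence pair
    return tokenizer(batch["claim"], batch["perspective"],
                     truncation=True, padding="max_length", max_length=128)

pairs = pairs.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="stance_model", num_train_epochs=3),
    train_dataset=pairs,
)
trainer.train()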
6
Sentiment, Emotion, Irony, Hate
678_2024
2,024
Marco Saioni, Cristina Giannone
Multimodal Attention is all you need
ENG
2
1
0
Università Guglielmo Marconi, Almawave
2
0
0
0
0
1
Cristina Giannone
Italy
Rome
In this paper, we present a multimodal model for classifying fake news. The main peculiarity of the proposed model is its cross-attention mechanism. Cross-attention is an evolution of the attention mechanism that allows the model to examine intermodal relationships to better understand information from different modalities, enabling it to focus simultaneously on the relevant parts of the data extracted from each. We tested the model using MULTI-Fake-DetectiVE data from EVALITA 2023. The presented model is particularly effective in both tasks: classifying fake news and evaluating the intermodal relationship.
The Internet has facilitated communication by enabling rapid, immersive information exchanges. However, it is also increasingly used to convey falsehoods, so today, more than ever, the rapid spread of fake news can have severe consequences, from inciting hatred to influencing financial markets or the course of political elections to endangering world security. For this reason, mitigating the growing spread of fake news on the web has become a significant challenge. Fake news manifests itself on the internet through text, images, video, audio, or, in general, a combination of these modalities, that is, in a multimodal way. In this article, we consider the two components of a news item, text and image, as it would be presented, for instance, on a social network. In this work we propose an approach to automatically and promptly identify fake news. We use the dataset of the MULTI-Fake-DetectiVE competition (https://sites.google.com/unipi.it/multi-fake-detective), proposed at EVALITA 2023 (https://www.evalita.it) AUTHOR. The competition aims to evaluate the truthfulness of news that combines text and images, an aim expressed through two tasks: the first carries out the identification of fake news (Multimodal Fake News Detection); the second seeks relationships between the two modalities, text and image, by observing the presence or absence of correlation or mutual implication (Cross-modal relations in Fake and Real News). Our approach proposes a Transformer-based model that focuses on relating the textual and visual embeddings of the input samples (i.e., the vector representations of the text and images it receives as input). The aim was to find a way to reconcile the two different representation embeddings, because they are learned separately from two different sources, text and images, trying to capture their mutual relationships through some interaction between the respective semantic spaces. The remainder of the paper is structured as follows: Section 2 presents a brief overview of related work, and Section 3 describes the architecture of the proposed model. Section 4 discusses the dataset supplied by the EVALITA tasks, followed by an overview of our experiments in Section 5. Sections 6 and 7 present the final results and our conclusions, respectively.
The Internet has facilitated the multimodality of communication by enabling rapid information exchanges that are increasingly immersive, but also increasingly used to convey falsehoods. In this study, we proposed a multimodal model for identifying fake news based on the mechanism of cross-attention between the representations of the features learned by the network on the textual component of a news item and those learned on the visual component associated with it. Many multimodal models are based on the concatenation of features learned from distinct modalities; despite achieving good performance, this limits the potential of the interaction between the features themselves. In the experiments carried out, the use of cross-attention yielded significant improvements in the performance of the proposed model compared to the first two models ranked in the MULTI-Fake-DetectiVE competition, for both tasks requested by the organizers, even though the dataset available for training is very small and unbalanced both with respect to the categories to be predicted and with respect to the source of the news. Despite the intrinsic complexity of the two tasks, the cross-attention layer of the proposed model manages to combine the representations learned from the text and images of a news story in a harmonious, collaborative, and synergistic way, balancing their contributions and preventing one from dominating the other. Future developments concern the components of the model, which could use a Vision Transformer AUTHOR instead of the ResNet, so that both textual and visual embeddings are generated by training Transformer networks.
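To clarify the mechanism being contrasted with plain feature concatenation, here is a minimal PyTorch sketch of cross-attention fusion, with text embeddings as queries attending over visual features; the dimensions and pooling are illustrative choices, not the exact architecture proposed in the paper.

import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Text tokens (queries) attend over image features (keys/values)."""
    def __init__(self, dim=768, num_heads=8, num_classes=2):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_emb, image_emb):
        # text_emb: (batch, text_len, dim); image_emb: (batch, n_regions, dim)
        fused, _ = self.cross_attn(query=text_emb, key=image_emb, value=image_emb)
        # Mean-pool the fused sequence before classification
        return self.classifier(fused.mean(dim=1))

model = CrossAttentionFusion()
text = torch.randn(4, 32, 768)   # e.g., BERT token embeddings
image = torch.randn(4, 49, 768)  # e.g., projected ResNet feature map
logits = model(text, image)      # shape: (4, 2)

A symmetric branch with image queries attending over text tokens can be added and the two outputs combined, which lets each modality weight the relevant parts of the other instead of merely juxtaposing them.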
15
Fact Checking and Fake News Detection
679_2024
2,024
Martina Saccomando, Andrea Zaninello, Francesca Masini
Morphological vs. Lexical Antonyms in Italian: a Computational Study on Lexical Competition
ENG
3
2
1
Università di Bologna, Fondazione Bruno Kessler, Libera Università di Bolzano
3
0
0
0
0
0
0
Italy
Bologna, Bolzano
In this paper, we examine the competition between pairs of adjectives in Italian that are antonyms of the same term: one is a ``morphological antonym'' formed by negative prefixation, the other is a ``lexical antonym'' with no morphological relationship with the term in question. We consider pairs of adjectives that are reported as antonyms in lexicographic resources and extract the nouns that can be modified by both adjectives from a large corpus. We select a set of 8 nouns for each pair that present higher, lower, and comparable frequencies in combination with each antonym, respectively, and then perform two experiments with an LLM. Firstly, we perform experiments on masked-token prediction of the adjective, to study the correlation between prediction accuracy and the frequency of the noun-antonym pair. Secondly, we perform a polarity-flip experiment with a multilingual LLM, asking the model to change the adjective into its positive counterpart, and study the cases where the antonym is changed to the morphological antonym's lexical base, under the hypothesis that a flip to the lexical base indicates a narrower set of senses of the antonymic counterpart.
Antonymy is the semantic relationship between terms with opposite meanings. In their canonical form, two antonyms' meanings can be represented as the poles of a semantic continuum where one term has a ``positive'' semantic value and the other a ``negative'' one AUTHOR. In Italian, given a word (e.g., the adjective felice `happy'), antonyms can either be realized via prefixation of that word (e.g. infelice `unhappy') or through an independent lexeme (e.g. triste `sad'). In our work, we refer to these types of antonyms as morphological antonym and lexical antonym, respectively. A word in the lexicon may have both a morphological and a lexical antonym, only one of them, or neither. In this paper, we are interested in triplets of adjectives where a positive adjective (e.g. felice) presents two possible antonyms (or ``co-antonyms''), one formed morphologically by prefixation (e.g. infelice), one morphologically independent (e.g. triste). Specifically, we study the factors that govern the selection of the morphological antonym vs. the lexical one. These two types of antonyms express ``negative'' semantics with respect to the opposite, ``positive'' term in different ways: implicitly in the case of lexical antonyms; explicitly in the case of morphological antonyms, by adding a prefix with a negative, contradicting value. Because of their different morphological structure, one possible hypothesis on their lexical competition is that the morphological antonym should have a more restricted semantics, representing the negation of the semantics of its adjectival base, while the lexical antonym should have a broader semantic coverage, as it is morphologically independent from its positive counterpart. To the best of our knowledge, there is no empirical study, especially on Italian, that investigates the competition between morphological and lexical antonyms within single languages. Studies on antonyms do identify the two types but generally do not address the factors influencing the preference for one type over the other intralinguistically. This study investigates the competition between these two types of antonyms by firstly studying their distribution in corpora (Section 5.1); secondly, testing the ability of a native-Italian language model to predict them in a masked-token prediction task (Section 5.2); and, finally, performing a substitutability task within the same context by switching the polarity of the context sentence with a SOTA multilingual LLM, in order to study when the adjective is switched to the positive un-prefixed adjective or to another, positive but morphologically unrelated one (Section 5.3).
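As an illustration of the masked-token prediction setting, the following sketch uses the Hugging Face fill-mask pipeline with a native-Italian BERT checkpoint; the checkpoint and the example sentence are illustrative assumptions, not the study's actual materials.

from transformers import pipeline

# Assumed native-Italian masked language model
fill = pipeline("fill-mask", model="dbmdz/bert-base-italian-xxl-cased")

# Context sentence with the antonym slot masked
sentence = f"Dopo la sconfitta, il ragazzo era davvero {fill.tokenizer.mask_token}."
for pred in fill(sentence, top_k=5):
    # Inspect whether 'infelice' (morphological) or 'triste' (lexical) is preferred
    print(pred["token_str"], round(pred["score"], 3))

Comparing the rank and probability of the two co-antonyms across contexts with different noun-antonym frequencies is the core of the correlation analysis described above.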
In our study, we aimed to highlight the differences between the two types of antonyms, morphological and lexical, focusing on a computational account of their contexts of use. While a lexical analysis did not prove decisive, experiments on masked-token prediction and polarity flip, aimed at approximating their semantic coverage, indicate that, contrary to what is often held in current studies, the lexical antonym seems to possess a narrower lexical coverage and scope, supporting the view (such as AUTHOR's and others') that it is indeed the morphological antonym, despite its apparently closer relationship with its lexical base, that possesses the higher degree of semantic generality. This study has an exploratory nature, as previously mentioned, due to the lack of specific research on the subject. For this reason, a limited number of adjectives were selected, exclusively belonging to the core vocabulary. Additionally, the study focused on the Italian language, not only due to the absence of recent in-depth studies on this language but also because of the general decline in linguistic research on Italian. The results obtained contradicted our initial hypothesis. In the future, these results could be confirmed or entirely overturned if a larger dataset were considered. Furthermore, it would be interesting to investigate whether the results obtained for Italian are also found in languages with a morphology different from that of Italian. In a broader framework of expanding the analysis, it would be interesting to focus on additional linguistic factors that determine the choice of a lexical antonym rather than a morphological one, such as semantic networks or word frequency.
7
Lexical and Semantic Resources and Analysis
680_2024
2,024
Francesca Nannetti, Matteo Di Cristofaro
Understanding the Future Green Workforce through a Corpus of Curricula Vitae from Recent Graduates
ENG
2
1
1
Università di Modena e Reggio Emilia
1
0
0
0
0
0
0
Italy
Modena
In view of the much-heralded ecological transition, to stay competitive and participate in the collective effort to face global warming and climate change, organisations need to select employees interested in and able to develop environmentally sustainable and innovative ideas. The existing literature, however, does not present consistent or concordant results on the actual interest, involvement and expertise of Generation Z members – namely, the newest entrants into the workforce – in green issues. This study presents a corpus-assisted methodology to explore the profile of the upcoming workforce expected to present itself to companies. With CVs as one of the first interfaces between candidate and company in the recruitment process, a purpose-built corpus consisting of Curricula Vitae from recent graduates of the University of Modena and Reggio Emilia was collected. Data is investigated through a Corpus-Assisted Discourse Studies (CADS) framework, proposing a novel interaction between structured metadata and textual information. The original contribution of this approach lies in the extraction of information from the narrative structure of CVs which, guiding the evaluation and exploration of metadata, ensures that the knowledge value of the data can be explored in a discursive manner and not reduced to lists of competences and qualifications.
The pursuit of environmentally sustainable growth is now more prominently featured on the global policy agenda than ever before [1], and the efforts to fight climate change and to support the transition towards low or net-zero carbon energy systems have manifested over the last decade through the increasing release of international agreements and strategies striving for a more sustainable future [2]. Achieving a successful transition to a more sustainable economy, however, requires not only government intervention policies, but also a new-generation workforce [3] composed of individuals able to deal with the complex issues and ambiguous situations associated with sustainable development in unpredictable and often rapidly changing circumstances [4]. Consequently, to stay competitive and participate in the collective effort to face global warming and climate change, organisations need to attract, identify, select and attempt to retain individuals interested in and able to develop green and innovative solutions [5]. Even though by 2025 27% of the workforce will be comprised of individuals from Generation Z [6] – namely, those born roughly between the mid-1990s and the early 2010s – and despite the growing body of research on this topic [7], the existing literature does not present consistent or concordant results on the actual interest, involvement and expertise of Generation Z in sustainable and environmental issues [8, 9]. Therefore, this study proposes a corpus-assisted methodology to explore the profile of Gen Z members as the newest entrants into the workforce, particularly considering the need for a large and well-qualified workforce to effectively manage the ecological transition. Given the crucial role played by universities in educating and shaping the next generation of professionals [10], a sample of recent graduates (2022-2023) has been identified as consistent and representative. Moreover, since in the very early stages of the selection process screening applicants' Curricula Vitae (CVs) is a widely used recruitment practice to shortlist the best candidates [11], CVs constitute the first documented interface between people and companies. Hence, this research is based on a purpose-built corpus [12] consisting of 8,096 Curricula Vitae from students who received a certified title at the University of Modena and Reggio Emilia during the 2022/2023 academic year, collected from the AlmaLaurea database. AlmaLaurea is an interuniversity consortium representing 82 Italian universities, aimed at facilitating graduates' access to the job market by helping them to connect with companies. In this regard, one of its main services is the database of students' Curricula Vitae. Data is investigated through a Corpus-Assisted Discourse Studies (CADS) framework – that "set of studies into the form and/or function of language as communicative discourse which incorporate the use of computerised corpora in their analyses" [13] – serving a novel methodological approach hinging on the interaction between CVs' structured metadata and textual information.
The contribution of this paper is mainly methodological and theoretical: starting from a gap in the literature, it proposes to collect and analyse a large number of Curricula Vitae with a novel approach hinging on the underlying narrative dimension of these documents, a procedure that requires triangulating metadata and textual information, and making use of both linguistic tools and data science techniques. Despite the reliance on standard approaches, the resulting combination offers both linguists and data scientists a novel perspective on CVs, ensuring that the knowledge value of the data can be explored in a discursive manner and not reduced to lists of competences and qualifications. Preliminary examples show the ability of this method to provide the means to build a profile of the generation described by the data. Additionally, the resulting details may provide interesting insights to companies seeking to engage recent graduates in supporting the ecological transition.
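A toy sketch of the kind of metadata-text triangulation involved, using pandas; the field names, green-vocabulary list, and records are hypothetical and serve only to illustrate how structured metadata can guide the exploration of the textual sections of CVs.

import re
import pandas as pd
from collections import Counter

# Hypothetical CV records: structured metadata plus a free-text section
cvs = pd.DataFrame({
    "degree_area": ["engineering", "economics", "engineering"],
    "grad_year": [2023, 2022, 2023],
    "text": [
        "progetto di efficienza energetica e sostenibilità",
        "stage in marketing e comunicazione",
        "tesi su energie rinnovabili e transizione ecologica",
    ],
})

# Illustrative green vocabulary
green_terms = {"sostenibilità", "rinnovabili", "ecologica", "energetica"}

def green_hits(text):
    tokens = Counter(re.findall(r"\w+", text.lower()))
    return sum(tokens[t] for t in green_terms)

# Triangulate: incidence of green vocabulary per metadata group
cvs["green_hits"] = cvs["text"].apply(green_hits)
print(cvs.groupby("degree_area")["green_hits"].mean())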
7
Lexical and Semantic Resources and Analysis
681_2024
2,024
Aria Rastegar, Pegah Ramezani
From 'It's All Greek to Me' to 'Nur Bahnhof verstehen': An Investigation of mBERT's Cross-Linguistic Capabilities
ENG
2
2
1
FAU Erlangen-Nuremberg
1
1
1
2
Aria Rastegar, Pegah Ramezani
0
0
Germany
Erlangen
This study investigates the impact of cross-linguistic similarities on idiom representations in mBERT, focusing on English and German idioms categorized by different degrees of similarity. We aim to determine whether different degrees of cross-linguistic similarity significantly affect mBERT's representations and to observe how these representations change across its 12 layers. Contrary to our initial hypothesis, cross-linguistic similarity did not uniformly impact idiom representations across all layers. While early and middle layers showed no significant differences among idiom categories, higher layers (from Layer 8 onwards) revealed more nuanced processing. Specifically, we observed significant differences between the control category and idioms with similar meaning (SM), as well as between idioms with similar lexical items (SL) and those with similar semantics (SM). Our analysis revealed that early layers provided general representations, while higher layers showed increased differentiation between literal and figurative meanings. This was evidenced by a general decrease in cosine similarities from Layer 5 onwards, with Layer 8 demonstrating the lowest cosine similarities across all categories. Interestingly, a trend suggests that mBERT performs slightly better with more literal hints. The order of cosine similarity across the categorizations was: idioms with a degree of formal similarity, control idioms, idioms with both formal and semantic similarity, and finally idioms with only semantic similarity. These findings indicate that mBERT's processing of idioms evolves significantly across its layers, with cross-linguistic similarity possibly having a stronger effect in higher layers, where more abstract semantic processing likely occurs.
Idioms are one of the most studied linguistic concepts and can broadly be defined as multi-word expressions that are often fixed in their syntactic and lexical aspects, while usually carrying meanings that cannot be directly deduced from the meanings of the individual words they contain AUTHOR. Given their syntactic and structural fixedness and non-compositional aspects, they were perceived as peripheral, supplementary, or appendixes to language grammars in earlier approaches to idioms AUTHOR. However, with the increasing interest in corpus studies of language, it has been observed that much of human linguistic production is routinized and prefabricated AUTHOR. Multi-word expressions with a high degree of conventionality do not seem to be marginal or limited linguistic constructions, as they play an important role in our everyday life AUTHOR. In addition, they are used in communication across various contexts, from novels to political debates and therapeutic dialogues AUTHOR. Given their characteristics and their conventionalized meanings, they pose many challenges to language speakers, especially non-native speakers AUTHOR. However, these characteristics also make them a good case study in different experimental linguistics settings. Recent advancements in Large Language Models (LLMs) and their widespread application have prompted linguists to investigate the performance of these models across various linguistic concepts, including idioms AUTHOR. In addition, in the case of multilingual models, an interesting research area is how these models encode the different languages on which they are trained AUTHOR. In this study, a categorization of English and German idioms based on three cross-linguistic degrees of similarity is proposed. One category includes idioms that have similar formal and semantic aspects in these languages; the second includes idioms with formal similarities but different semantic aspects; and the third includes idioms with similar semantic aspects but different formal aspects. The goal of our work is to consider how cross-linguistic similarities among idioms affect their representation in mBERT. More specifically, the questions underlying the following experiment were: (i) Does cross-linguistic similarity have a significant impact on the representation of idioms in mBERT? (ii) Do the degree of cross-linguistic similarity and the representation of the model change across the 12 layers of mBERT? We hypothesized that mBERT's performance would depend on how it utilizes its multilingual training data. Namely, if mBERT draws from a collective pool of all languages, it should perform consistently across all cross-linguistic categories, similar to how it represents idioms from the language it has been given, that is, English in this case. However, if it primarily retrieves data from specific languages, we expect to observe significant performance differences among the categories, potentially mirroring some of the patterns seen in cross-linguistic studies with second language speakers. That is, identical cross-linguistic idioms should be represented almost identically to the control idioms (in this case, English idioms), while idioms with formal and lexical correspondence could be represented both similarly and, in some cases, quite differently from the control idioms.
Finally, idioms with only corresponding semantics and different formal aspects should be the most differently represented compared to the control group. Furthermore, given the proposed categorizations based on formal and semantic similarities, we anticipated varying performance across mBERT's 12 layers. In particular, in lower layers we expect less differentiation among categories, as these layers typically capture more surface-level features, while in higher layers, which represent more of the semantic aspects, we anticipate more varied trends and larger differences among the categories, mostly because we are primarily focused on the figurative meaning of idioms across different categories.
Our study investigated how cross-linguistic similarities among idioms affect their representation in mBERT, with a focus on English and German idioms categorized based on three degrees of cross-linguistic similarity. This study aims to answer two main research questions: whether cross-linguistic similarity has a significant impact on the representation of idioms in mBERT, and how the degree of cross-linguistic similarity and the representation of the model change across mBERT's 12 layers. Our findings provide insights into these questions and our initial hypotheses. Contrary to our initial hypothesis, we found that cross-linguistic similarity does not have a uniformly significant impact on the representations of idioms across all layers of mBERT. The main effects of our translated idioms, categorized into cross-linguistic categories (SI: formal and semantic similarity, SL: similar lexicon, SM: similar meaning), were not statistically significant when compared to the control category (English idioms) in the early and middle layers of the model. This result may suggest that mBERT utilizes knowledge from all languages in its training data as a collective pool, at least in the case of the studied idioms. This aligns with the idea that mBERT learns multilingual representations that go beyond simple vocabulary memorization, as suggested by Pires et al. AUTHOR. However, the emergence of significant differences in higher layers (particularly from Layer 8 onwards) might indicate that mBERT's processing of idioms becomes more nuanced as information propagates through the network. This finding partially supports our hypothesis that mBERT might show different performances for each cross-linguistic categorization, but suggests that these differences are more significant in the model's deeper layers. Although there are no significant differences among all categories, in Figure fig:cosine-category there is a continuous trend across layers showing the highest similarity for the SL category, then Control, followed by SI, and finally the SM category. This trend indicates that mBERT represents almost all categories similarly and tends to perform better when there are more literal hints, which aligns with the findings on multilingual transfer of Pires et al. AUTHOR. Moreover, for idioms with semantic similarities, the model demonstrates the lowest cosine similarity between the representations of idioms and their figurative meanings, which might suggest that idioms with only semantic correspondence across the studied languages pose a greater challenge for mBERT in capturing figurative meanings. Our second research question focused on how the representation of idioms changes across mBERT's 12 layers. In this analysis, distinct patterns were observed. In early layers (1-4), the cosine similarity between the CLS embeddings derived from mBERT for the idioms and their corresponding figurative meanings was high and relatively uniform across all categories, suggesting a more general representation. We believe the high similarity in early layers can be related to similarity in the syntax of the samples and the provided figurative entities, since these layers capture more formal and syntactic information. Layer 3 demonstrated the highest cosine similarities, while from Layer 5 onwards a general decrease in cosine similarities was observed, suggesting increased differentiation between literal and figurative meanings in higher layers.
Layer 8 showed the lowest cosine similarities and marked the beginning of significant differences between categories, particularly for semantically similar idioms (the SM category). These findings support our hypothesis that we would observe different performances across the layers of mBERT, given the formal and semantic similarities of idioms.
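The per-layer analysis can be sketched as follows: extract the CLS embedding from each of mBERT's 12 transformer layers and compare an idiom with a literal paraphrase of its figurative meaning. The example sentences are illustrative, not items from the study's dataset.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased",
                                  output_hidden_states=True)

def cls_per_layer(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states  # embedding layer + 12 layers
    # CLS token (position 0) from each of the 12 transformer layers
    return [h[0, 0] for h in hidden[1:]]

idiom = cls_per_layer("He kicked the bucket last year.")
meaning = cls_per_layer("He died last year.")
for layer, (a, b) in enumerate(zip(idiom, meaning), start=1):
    sim = torch.cosine_similarity(a, b, dim=0).item()
    print(f"layer {layer:2d}: cosine similarity = {sim:.3f}")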
22
Distributional Semantics
682_2024
2,024
Riccardo Orlando, Luca Moroni, Pere-Lluís Huguet Cabot, Edoardo Barba, Simone Conia, Sergio Orlandini, Giuseppe Fiameni, Roberto Navigli
Minerva LLMs: The First Family of Large Language Models Trained from Scratch on Italian Data
ENG
8
0
0
Sapienza Università di Roma, CINECA, NVIDIA
3
1
0
1
Giuseppe Fiameni
1
Giuseppe Fiameni
Italy, California (USA)
Rome, Bologna, Santa Clara (California)
The growing interest in Large Language Models (LLMs) has accelerated research efforts to adapt these models for various languages. Despite this, pretraining LLMs from scratch for non-English languages remains underexplored. This is the case for Italian, where no truly open-source research has investigated the pretraining process. To address this gap, we introduce Minerva, the first family of LLMs trained entirely from scratch on native Italian texts. Our work is the first investigation into the challenges and opportunities of pretraining LLMs specifically for the Italian language, offering insights into vocabulary design, data composition, and model development. With Minerva, we demonstrate that building an LLM tailored to a specific language yields numerous practical benefits over adapting existing multilingual models, including greater control over the model's vocabulary and the composition of its training data. We provide an overview of the design choices, pretraining methods, and evaluation metrics used to develop Minerva, which shows promising performance on Italian benchmarks and downstream tasks. Moreover, we share the lessons learned throughout Minerva's development to support the academic and industrial communities in advancing non-English LLM research. We believe that Minerva serves as an important step towards closing the gap in high-quality, open-source LLMs for non-English languages.
Large Language Models (LLMs) have revolutionized the way Natural Language Processing (NLP) tasks are approached, achieving remarkable results in existing areas and opening the door to entirely new research directions and applications. As a result, the energy and resources dedicated to the study and creation of LLMs are growing exponentially. However, most LLMs, both closed- and open-source, are predominantly designed for English, posing significant challenges and limitations for their use in non-English settings. In practice, generating Italian text using multilingual or language-adapted English models, e.g., from Mistral AUTHOR or Llama AUTHOR, is computationally more expensive and often less effective than using a model specifically designed for the Italian language. This inefficiency stems from the vocabulary of an English or multilingual LLM, i.e., the lexical units, or tokens, that the model can use to compose text, not being optimized for the Italian language, resulting in Italian words being split into an excessive number of tokens. Consequently, this creates longer sequences of tokens, slower generation times, and higher computational costs, especially since many popular attention mechanisms have quadratic complexity with respect to sequence length. Efforts to create language-specific LLMs are increasing, and fall primarily into two main categories: i) adapting existing English-centric LLMs to other languages, and ii) training LLMs from scratch. The advantages of adapting existing English-centric LLMs to other languages are enticing: starting with a proven model can reduce the computational requirements, and adaptation can be achieved with relatively modest amounts of data. There are several language adaptation techniques, ranging from fine-tuning the model on data for the target language AUTHOR to modifying the model's architecture AUTHOR, making these techniques flexible for different budgets and objectives. However, these techniques may not fully capture language-specific nuances and can degrade performance in the original language, an undesirable effect. Alternatively, training LLMs from scratch provides the freedom to make design choices tailored to the linguistic features of the target language (including morphology, lexicon, syntax, and semantics), which are often overlooked in English-centric models AUTHOR. It also allows for incorporating culturally relevant content, reducing biases that might be present in models primarily trained on English data, thus leading to more inclusive and accurate representations of language use. Unfortunately, while there are several efforts on adapting English-centric LLMs to the Italian language, e.g., Llamantino-2 AUTHOR, Llamantino-3 AUTHOR, DanteLLM AUTHOR, and Camoscio AUTHOR, inter alia, there is no truly open-source endeavor exploring what can be achieved by training an LLM from scratch on Italian data. With this work, we follow the latter path and introduce Minerva, the first family of LLMs designed specifically for the Italian language and pretrained on Italian text. We present the design choices for our models, our data processing, and the evaluation results for our Minerva LLMs, showing that our models -- with 350M, 1B, 3B, and 7B parameters -- outperform comparable multilingual models and even rival larger models adapted for Italian.
We conclude with a discussion on the benefits and challenges of pretraining LLMs from scratch for the Italian language, sharing our experience and findings to provide valuable insights for the academic and industrial communities interested in training non-English LLMs from scratch. Lastly, we describe the technical details of Minerva-7B, our latest model with 7.4 billion parameters, for which we share our initial results.
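The tokenization argument above is easy to check empirically. Below is a small sketch comparing tokenizer fertility (tokens per whitespace-delimited word) on an Italian sentence across a few publicly available checkpoints; the checkpoints are illustrative, not the tokenizers compared in the paper.

from transformers import AutoTokenizer

text = ("La straordinariamente lunga frase italiana mette alla prova "
        "i tokenizzatori addestrati prevalentemente su testo inglese.")

# Illustrative checkpoints: English-centric vs. multilingual tokenizers
for name in ["gpt2", "bert-base-multilingual-cased", "xlm-roberta-base"]:
    tok = AutoTokenizer.from_pretrained(name)
    n_tokens = len(tok.tokenize(text))
    fertility = n_tokens / len(text.split())
    print(f"{name}: {n_tokens} tokens, fertility = {fertility:.2f}")

Higher fertility means longer token sequences for the same text, hence slower generation and higher cost, which is precisely the overhead a language-specific vocabulary is designed to avoid.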
In this paper, we demonstrated the feasibility and benefits of pretraining Italian language models from scratch, which not only improves the computational efficiency and performance of an LLM for a target language but also reduces the linguistic biases inherited from English training corpora AUTHOR. The Minerva models showcase promising results on a variety of Italian benchmarks and downstream tasks, including news summarization and machine translation. Most importantly, we describe, for the first time, the process of creating an Italian pretraining corpus with more than 1T tokens, and we share findings and insights into the pretraining process of Italian LLMs with the academic and industrial communities, paving the way for future research in training non-English language models. We hope that our contributions will represent a stepping stone for future work on language-specific and multilingual large-scale language modeling. Pere-Lluís Huguet Cabot, Simone Conia and Edoardo Barba are fully funded by the PNRR MUR project PE0000013-FAIR (https://fondazione-fair.it/). Roberto Navigli also acknowledges the support of the PNRR MUR project PE0000013-FAIR (https://fondazione-fair.it/). The authors acknowledge the CINECA award IsB28_medit under the ISCRA initiative for the availability of high-performance computing resources and support.
1
Language Models
683_2024
2,024
Luca Moroni, Simone Conia, Federico Martelli, Roberto Navigli
ITA-Bench: Towards a More Comprehensive Evaluation for Italian LLMs
ENG
4
0
0
Sapienza Università di Roma
1
0
0
0
0
0
0
Italy
Rome
Recent Large Language Models (LLMs) have shown impressive performance in addressing complex aspects of human language. These models have also demonstrated significant capabilities in processing and generating Italian text, achieving state-of-the-art results on current benchmarks for the Italian language. However, the number and quality of such benchmarks are still insufficient. A case in point is the ``Open Ita LLM Leaderboard'', which supports only three benchmarks despite being one of the most popular evaluation suites for Italian-language LLMs. In this paper, we analyze the current limitations of existing evaluation suites and propose two ways of addressing this gap: i) a new suite of automatically-translated benchmarks, drawn from the most popular English benchmarks; and ii) the adaptation of existing manual datasets so that they can be used to complement the evaluation of Italian LLMs. We discuss the pros and cons of both approaches, releasing our data to foster further research on the evaluation of Italian-language LLMs.
LLMs are becoming more and more prominent in NLP, showing impressive results on an increasing range of standard benchmarks, thanks in particular to their reasoning and in-context-learning capabilities AUTHOR. The current trend points towards increasingly larger models trained on massive amounts of data AUTHOR. However, despite these advancements, there remains a significant gap in the availability of high-quality benchmarks for languages other than English, including Italian, which is often considered, too optimistically, a high-resource language. Benchmarks are essential for measuring progress in NLP, providing a standardized way to evaluate and compare models, and this is now especially important for Italian given the growing number of language-specific models being developed for the language AUTHOR. High-quality benchmarks must be well crafted to ensure they accurately reflect the complexities of the language and the specific challenges it presents. As of today, most of the existing Italian benchmarks are translations of English datasets, which may not fully capture the nuances and unique characteristics of the Italian language. Nevertheless, the ability to automatically translate English benchmarks into Italian is valuable and enticing for two main reasons. First, it provides a way to compare almost 1-to-1 the results obtained in English to the ones obtained in Italian, as the translation process is designed to keep an alignment from the source to the target text. Second, it provides a quick and relatively simple way of producing a benchmark in Italian, assuming that the translation tool is able to produce high-quality outputs. Unfortunately, the current evaluation suites that are based on automatic translations include only a limited number of benchmarks. For instance, the ``Open Ita LLM Leaderboard'', which is one of the most popular evaluation suites for Italian LLMs, relies on just three main benchmark translations, namely, MMLU, HellaSwag, and ARC-Challenge. This biases and hampers the assessment, and may not allow the advanced capabilities of modern LLMs to be fully analyzed, even though recent efforts are starting to address this limitation AUTHOR. Having gold LLM benchmarks natively written in Italian is also important, as their scarcity hinders the accurate evaluation of LLMs' capabilities in the Italian language, limiting our understanding of their true performance and potential areas for improvement. Indeed, translations of English-centric benchmarks may contain instances that refer to concepts, entities, cultures, traditions, historic events, politics, and economics that differ from what one is more likely to find in Italian texts and/or in Italy AUTHOR. However, the creation of completely new datasets that take such elements into account is difficult, complex, and time-consuming, and requires expert knowledge. Falling in between automatic translations of existing English datasets and the creation of brand-new datasets in Italian, there is the option of adapting existing Italian datasets, originally created for a different purpose, to measure the capabilities of LLMs in Italian language understanding and generation. This direction has gained traction over the past few months, with efforts that focus on repurposing Italian tests (usually designed for humans) to evaluate LLMs instead AUTHOR.
In this paper, we follow both directions and introduce ITA-Bench, a more comprehensive benchmark suite for the evaluation of Italian-language LLMs. First, ITA-Bench proposes a new extended suite of benchmarks created by automatically translating the most popular English benchmarks into Italian. Second, ITA-Bench includes existing manually curated datasets, adapted to enhance the evaluation framework for Italian LLMs. These two complementary approaches aim to bridge the evaluation gap and provide a more thorough understanding of the capabilities of Italian-language LLMs. With ITA-Bench, we hope to foster further development and refinement of evaluation techniques for Italian LLMs, ultimately contributing to the broader field of multilingual NLP. ITA-Bench is available at .
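To show what running such a benchmark typically involves, here is a hedged sketch of the log-likelihood scoring of multiple-choice items commonly used by evaluation harnesses; the model checkpoint and the item are toy placeholders, not ITA-Bench's actual implementation.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder causal LM; substitute the model under evaluation
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def option_logprob(question, option):
    # Score log P(option | question) by summing the option tokens' log-probs;
    # assumes the question tokenizes identically as a prefix of the full string
    q_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    full = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    return sum(logprobs[i, full[0, i + 1]].item()
               for i in range(q_len - 1, full.shape[1] - 1))

question = "Qual è la capitale d'Italia?"
options = ["Roma", "Milano", "Napoli"]
print(max(options, key=lambda o: option_logprob(question, o)))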
In this paper, we introduce a novel evaluation suite aimed at advancing the Italian community's ability to assess the competencies of LLMs on Italian data. Our approach follows two main directions. First, we define a novel pipeline called OBenTO, which involves translating state-of-the-art English benchmarks into Italian. Second, we rephrase existing Italian benchmarks to be used for prompting and testing large language models. Additionally, we conduct a comprehensive evaluation of the quality of automatically translated benchmarks, highlighting the inherent challenges of such an approach and analyzing the errors made by four LLMs. We hope that our work can provide a solid framework for evaluating the capabilities of current and future LLMs in Italian. Simone Conia gratefully acknowledges the PNRR MUR project URL, which fully funds his fellowship. Federico Martelli and Roberto Navigli acknowledge the support of the CREATIVE project (CRoss-modal understanding and gEnerATIon of Visual and tExtual content, Progetti di Interesse Nazionale - PRIN 2020). Finally, we acknowledge the work of the M.Sc. students of Prof. Navigli's multilingual NLP course for the Academic Year 2024, whose contributions provided valuable insights and ideas for the adaptation of existing Italian benchmarks. We acknowledge the CINECA award IsB28_medit under the ISCRA initiative for the availability of high-performance computing resources and support.
1
Language Models
684_2024
2,024
Vincenzo Norman Vitale, Loredana Schettino, Francesco Cutugno
Modelling filled particles and prolongation using end-to-end Automatic Speech Recognition systems: a quantitative and qualitative analysis.
ENG
3
1
0
Università di Napoli Federico II, Libera Università di Bolzano
2
0
0
0
0
0
0
Italy
Naples, Bolzano
State-of-the-art automatic speech recognition systems based on End-to-End models (E2E-ASRs) achieve remarkable performance. However, phenomena that characterize spoken language, such as fillers (<eeh>, <ehm>) or segmental prolongations (the<ee>), are still mostly treated as disrupting objects that should be excluded to obtain optimal transcriptions, despite their acknowledged regularity and communicative value. A recent study showed that two types of pre-trained systems with the same Conformer-based encoding architecture but different decoders – a Connectionist Temporal Classification (CTC) decoder and a Transducer decoder – tend to model speech features that are functional for the identification of filled pauses and prolongations in speech. This work builds upon these findings by investigating which of the two systems is better at detecting fillers and prolongations, and by conducting an error analysis to deepen our understanding of how these systems work.
In recent work on Automatic Speech Recognition (ASR) systems based on the computing power of Deep Neural Networks (DNNs), a great deal of effort is focused on increasing the systems' performance by employing increasingly complex, hence hardly interpretable, DNN models that require huge amounts of training data, like the End-to-End Automatic Speech Recognition (E2E-ASR) models which represent the state of the art. An E2E-ASR model directly converts a sequence of input acoustic feature vectors (or possibly raw audio samples) into a series of graphemes or words that represent the transcription of the audio signal AUTHOR, as represented in figure fig:e2e_asr. In contrast, traditional ASR systems typically train the acoustic, pronunciation, and language models separately, requiring distinct modelling and training for each component. These systems usually aim to obtain speech transcriptions 'cleaned' of phenomena that characterise spoken language, such as discourse markers, particles, pauses, or other phenomena commonly referred to as 'disfluencies'. Studies on the interpretability of the dynamics underlying neural models have shown that state-of-the-art systems based on End-to-End models (E2E-ASRs) can model linguistic and acoustic features of spoken language, which can be investigated to explain their internal dynamics. Several probing techniques have been designed to inspect and better understand the internal behavior of DNN layers at different depths. With these techniques, investigations of the internals of DeepSpeech2 AUTHOR revealed the influence of diatopic pronunciation variation in various English varieties and provided evidence that intermediate layers contain information crucial for their classification. Later, a study AUTHOR on the layerwise capacity to encode information about acoustic features, phone identity, word identity, and word meaning based on the context of occurrence highlighted that the last layer, right before the decoding module, retains information about word meaning, rather than the local acoustic features and phone identity information captured by the first and intermediate layers, respectively. Other studies have further investigated the capacity of state-of-the-art models to encode phonetic/phonemic information AUTHOR, lexical tone AUTHOR and gender AUTHOR. Finally, AUTHOR investigated the internal dynamics of three pre-trained E2E-ASRs, evidencing the emergence of syllable-related features by training an acoustic-syllable boundary detector. Following this line of research, a recent study AUTHOR investigated the ability of two types of pre-trained systems with the same Conformer-based encoding architecture but different decoders – a Connectionist Temporal Classification (CTC) decoder and a Transducer decoder – to model features that distinguish filled pauses and prolongations in speech, and showed that, despite not being originally trained to detect disfluencies, these systems tend to model speech features that are functional for their identification. Rather than disregarding the ability of E2E-ASRs to model the acoustic information tied to such speech phenomena as a dispensable noise source, it could be exploited to achieve different ends. On the one hand, it could be used to obtain more accurate transcriptions that provide better, or rather more faithful, representations of the speech signal, which would also support linguistic annotation processes.
On the other hand, exploring the systems' modelling ability deepens our understanding of their underlying dynamics. Over the last 20 years, disfluency detection tasks have been conducted to improve speech recognition performance AUTHOR, and several recent approaches to filler detection achieve rather high performance, see AUTHOR. However, these investigations mostly concern filler particles and, to our knowledge, no such system has been tested on Italian data so far. The proposed work builds upon these findings by investigating which of the two decoding systems is better at detecting fillers and prolongations. Moreover, a quantitative and qualitative error analysis is conducted to deepen our understanding of the way these systems work.
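To make the probing setup concrete, here is a minimal sketch (our illustration, not the authors' code) of a frame-level disfluency probe: an LSTM classifier trained on activations extracted from one encoder layer of a pre-trained E2E-ASR, labelling each frame as fluent speech, filled pause, or prolongation. All dimensions and names are assumptions.

```python
# Minimal probing sketch: an LSTM tagger over frame-level encoder activations.
# Assumes activations have already been extracted from one encoder layer of a
# pre-trained E2E-ASR model; feature sizes here are illustrative.
import torch
import torch.nn as nn

class DisfluencyProbe(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, n_classes=3):
        super().__init__()
        # Unidirectional LSTM, mirroring the architecture choice discussed
        # in the paper's conclusion.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)
        return self.head(out)      # per-frame logits: (batch, frames, 3)

probe = DisfluencyProbe()
activations = torch.randn(4, 200, 256)   # dummy batch of encoder activations
labels = torch.randint(0, 3, (4, 200))   # 0=fluent, 1=filled pause, 2=prolongation
loss = nn.CrossEntropyLoss()(probe(activations).transpose(1, 2), labels)
loss.backward()
```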
In this work, we build upon a previous study that investigated to what extent modern E2E-ASRs encode features related to disfluency phenomena, even if they are not directly trained to do so. We showed that pre-trained models with the same audio encoder but two different state-of-the-art decoding strategies (CTC and Transducer) capture disfluency-related features, especially in the last encoding layer, and both model features that can be used for the identification and positioning of disfluent speech segments AUTHOR. Although there seems to be a tendency to forget this information in subsequent layers, as the layerwise trends for DTW distance and F1-measure suggest, the last layers, which are those closest to the objective function represented by the decoding module, seem the most prone to retain characteristics useful to locate and identify disfluency phenomena. Interestingly, despite the differences between the two decoding modules, which are respectively non-recurrent (CTC) and recurrent (RNN-T), the performances on the chosen task are comparable. However, the confusion matrices highlight that the CTC-based classifier performs better in the disfluency feature discrimination task, while the Transducer-based classifier more precisely identifies filled pauses, which could be related to the scope (recurrent/non-recurrent) of the objective function. These results align with the literature, which shows a strong sensitivity to word- and phone-related features in the layers closest to the decoder AUTHOR, while the layers closest to the input are more sensitive to features related to accent and local acoustic characteristics AUTHOR. It is worth noting that, in a recent work AUTHOR, sensitivity to syllabic boundaries was found in layers 3-5, with a pattern similar to the per-layer metrics observed here but without the peak in the last layers. The reason may be that syllables and their boundaries have no graphic counterpart in the transcriptions, whereas disfluencies do have a transcription form that identifies them within a language model. The exploratory error analysis highlighted that prolongations are more difficult to detect than filled pauses, which could depend on their being an integral (though lengthened) part of 'fluent' words, while filled pauses are mostly realized as independent elements. Instances of prolongation are mostly unrecognized, or misclassified as filled pauses, when characterized by peculiar 'non-prototypical' phonation features, such as creaky phonation, or by filler-like features, as in the case of monosyllabic word-final prolongations. Moreover, previous studies on the segmental quality of prolongations in Italian AUTHOR showed that prolongations, especially those concerning consonantal sounds, can be realised with schwa sounds similar to those that characterize most filled pauses. This filler-like quality could also be among the underlying reasons for the negative correlation between the evaluation metrics for prolongation misclassification and prolongation duration. Another possible cause could reside in a bias in the dataset combined with the classifier architecture (LSTM), which easily recognises prolongations that follow a specific length pattern: the scarcity of longer prolongations hinders their modelling and leads to their misclassification.
These findings could be used to improve transcription applications by enriching them with disfluency annotation (including filler particles and prolongation phenomena), which is still a rather costly process for studies concerning hesitation phenomena and (own) speech management in typical as well as atypical speech (e.g., pathological or language learners' speech). Indeed, an immediate development of the described work consists in extending the pre-trained E2E-ASRs with a simple disfluency identification module that complements the existing decoder, thus enriching the resulting transcriptions. Our work is built upon unidirectional LSTMs rather than bidirectional LSTMs (BiLSTMs): although the latter provide better performance, they have slightly longer inference times, require larger amounts of data, resources, and time to be trained, and, most importantly, present a more complex behaviour AUTHOR. However, the introduction of different architectural modules such as BiLSTMs could improve the detection of prolongation disfluencies. This will be part of future developments focused on performance and increased neural network complexity.
13
Multimodal
685_2024
2,024
Marco Vassallo, Giuliano Gabrieli, Valerio Basile, Cristina Bosco
Neutral Score Detection in Lexicon-based Sentiment Analysis: the Quartile-based Approach
ENG
4
1
0
CREA Research Centre for Agricultural Policies and Bio-economy, Università di Torino
2
0
0
0
0
0
0
Italy
Rome, Turin
Neutrality detection in Sentiment Analysis (SA) still constitutes an unsolved and debated issue. This work proposes an empirical method based on the quartiles of the polarity distribution for a lexicon-based SA approach. Our experiments are based on the Italian linguistic resource MAL (Morphologically-inflected Affective Lexicon) and applied to two annotated corpora. The findings show improved detection of neutral expressions while preserving substantial overall polarity prediction.
Sentiment Analysis (SA) is a well-studied task in Natural Language Processing (NLP), whose main objective is to classify opinions from natural language expressions as positive, neutral, negative, or a mixture of those [1]. Neutrality detection in SA is an issue approached in different ways [2, 3, 4], but there is still little agreement on how to detect neutral expressions [4, p.136]. In this paper, we approach neutrality detection in lexicon-based SA, where an affective lexicon provides polarity scores ranging from −a to +a with a ∈ ℕ, by using a descriptive statistical method based on quartiles. To our knowledge, this issue has not been investigated so far. We aim to draw attention towards a better prediction of neutral expressions. This is done by automatically finding an optimal interval of neutral scores while controlling for the asymmetry of the distribution of scores across the polarity spectrum. Traditionally, neutrality scores have been assumed to lie around point 0, or within a conventionally fixed and algebraically-led interval of [−.5; +.5]. Conversely, it seems more reasonable to postulate that this neutral cluster lies in a dynamic interval around the zero value. As expected, the [−.5; +.5] interval is indeed insufficient for capturing neutral values, especially when the polarity scores are symmetrical around zero, because small positive or negative deviations from zero can be incorrectly classified into their respective polarity even when they are neutral. Furthermore, for topics with many controversial opinions, where polarities are widely dispersed, the misclassification of neutral expressions becomes significant, as small positive and negative deviations from zero may be more frequent. As a consequence, the neutral interval also appears to be topic-oriented and thus differs across SA tasks, since the topic can, in turn, also influence the symmetry of the distribution of scores. The linguistic counterpart to this phenomenon is that “opinions may be so different that common ground may not be found” [5]. On the other hand, especially in the case of unimodal distributions, the more asymmetrical the polarity-score distribution is, the more the polarities are positively or negatively skewed, and the less likely a false neutral classification should occur. In the case of multimodal distributions, with multiple possible polarizations, detecting the asymmetry becomes more complex, as does detecting the neutral expressions. However, setting aside the peculiar situation in which oppositely polarized scores have the same frequencies, the more skewed a multimodal distribution is (with many modes/peaks possibly far from zero), the less likely false neutral classifications should again be.
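As a rough illustration of the quartile idea, the sketch below classifies scores as neutral when they fall inside a band derived from the first and third quartiles of the observed distribution. Note that the paper searches for an optimal interval within [Q1, Q3], so this direct use of the quartiles themselves is a simplification.

```python
# Sketch of a quartile-based neutral band (our simplified reading of the
# method): instead of a fixed [-0.5, +0.5] band, derive a topic-dependent
# neutral zone from the observed polarity-score distribution.
import numpy as np

def classify_with_quartile_band(scores):
    q1, q3 = np.percentile(scores, [25, 75])   # asymmetry-aware interval [Q1, Q3]
    labels = []
    for s in scores:
        if q1 <= s <= q3:
            labels.append("neutral")
        else:
            labels.append("positive" if s > q3 else "negative")
    return labels, (q1, q3)

scores = np.array([-2.1, -0.4, -0.1, 0.0, 0.2, 0.3, 1.8, 2.5])
labels, band = classify_with_quartile_band(scores)
print(band, labels)
```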
The asymmetry of a polarity-score distribution seems to be topic-oriented, and therefore neutrality detection for lexicon-based SA with polarity scores reasonably passes through an optimal interval within the first and third quartiles [Q1, Q3] that takes this asymmetry into account. The findings of this work suggest that the quartile-based approach is suitable for any corpus on which a lexicon-based SA task with scores is performed. Hence, we strongly recommend further experiments on other corpora, both annotated and unannotated, and comparing/integrating this method with others (e.g., Valdivia et al. [4]) toward the common objective of detecting neutral expressions. Finally, it is worth noting that our methodological framework led us to run experiments on test sets of different sizes in order to consider all potential and reasonable unseen-data situations. Alternatively, one could propose a similar experiment with fixed-size test sets, which would provide more stable results, comparable even with established benchmarks, but would also significantly reduce the amount of test data.
6
Sentiment, Emotion, Irony, Hate
686_2024
2,024
Tom Bourgeade, Silvia Casola, Adel Mahmoud Wizani, Cristina Bosco
Data Augmentation through Back-Translation for Stereotypes and Irony Detection
ENG
4
2
0
LORIA University of Lorraine, Ludwig-Maximilians-University Munich, Università di Torino
3
1
0
2
Tom Bourgeade, Silvia Casola
0
0
Italy, France, Germany
Villers-lès-Nancy, Munich, Turin
Complex linguistic phenomena such as stereotypes or irony are still challenging to detect, particularly due to the lower availability of annotated data. In this paper, we explore Back-Translation (BT) as a data augmentation method to enhance such datasets by artificially introducing semantics-preserving variations. We investigate French and Italian as source languages on two multilingual datasets annotated for the presence of stereotypes or irony, and evaluate French/Italian, English, and Arabic as pivot languages for the BT process. We also investigate cross-translation, i.e., augmenting one language subset of a multilingual dataset with translated instances from the other languages. We conduct an intrinsic evaluation of the quality of back-translated instances, identifying linguistic or translation-model-specific errors that may occur with BT. We also perform an extrinsic evaluation of different data augmentation configurations to train a multilingual Transformer-based classifier for stereotype or irony detection on monolingual data. Warning: this paper may contain potentially offensive example messages.
Equipping systems with linguistics-grounded capabilities can be complex. Despite the advancements of Large Language Models (LLMs), the availability of annotated corpora remains crucial. State-of-the-art systems still exhibit shortcomings, for example, when access to context or pragmatics is required for a true comprehension of the phenomena involved [1]. Unfortunately, the development of large datasets annotated for specific, complex phenomena can be very time-consuming. When only small corpora are available, data augmentation techniques can be applied [2, 3]. Given a small set of original sample data, data augmentation artificially generates new instances that are similar and comparable to the existing data and can, therefore, be used to train and test systems with an extended dataset. In this paper, we present experiments for augmenting two small datasets annotated for two diverse, challenging phenomena, namely stereotype and irony detection. In several works exploring data augmentation, Back-Translation (BT) [4] was shown to be a strong and relatively easy-to-implement baseline [5, 6]. A BT process generally consists of two steps: given one or multiple translation systems, a text in a source language is first translated into a chosen pivot language, and the resulting text is then translated back into the source language. The expected output of the BT process is a text that is similar to but not the same as the original input, accounting for the linguistic differences intrinsic to the language pair, but also for the idiosyncrasies of the chosen translation model(s). This relies on the fact that translation is only partially deterministic: while the expected output should have the same meaning as the input, outputs that differ morphologically or syntactically may still be considered correct translations of the input. In BT, the application of (at least) two translations increases the variability between the input and the output text. The usefulness of a dataset augmented by applying BT depends on the quality of the translated outputs. Outputs too similar to the inputs can cause overfitting when used for training, while outputs that are too different risk introducing an excessive distribution shift, which may negatively impact performance, at least in intra-dataset evaluations. A compromise between these two alternatives must be found. Therefore, an evaluation of the quality of translations and back-translations is important to assess the benefits. In this paper, we investigate the viability of BT as a data augmentation technique for low-resource tasks in various configurations. We use French and Italian as source languages — leveraging two multilingual datasets with subsets for these languages — and various languages as pivots for the BT process (French/Italian, English, and Arabic). We compare BT with an alternative process for data augmentation, specific to multilingual datasets, which we refer to as “cross-translation,” where the data from one language subset is translated and then used as a data augmentation source for another language subset. Our contributions are (1) an intrinsic qualitative human evaluation of translations and back-translations for stereotype and irony detection datasets in various combinations of source and pivot languages, followed by (2) an extrinsic evaluation of machine learning model performance on these datasets, using these various data augmentation sources.
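A minimal back-translation sketch, assuming publicly available MarianMT checkpoints (Helsinki-NLP's opus-mt models) as the translation systems; the paper's actual models may differ. Italian is the source language and English the pivot.

```python
# Back-translation sketch using public MarianMT checkpoints (illustrative;
# the translation systems used in the paper may differ). The BT chain is
# source -> pivot -> source, here it -> en -> it.
from transformers import pipeline

it_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")
en_to_it = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")

def back_translate(text_it):
    pivot = it_to_en(text_it)[0]["translation_text"]    # source -> pivot
    return en_to_it(pivot)[0]["translation_text"]       # pivot -> source

original = "Non vedo l'ora di leggere questo articolo."
augmented = back_translate(original)   # paraphrase-like variant of the input
print(original, "->", augmented)
```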
In this work, we have investigated Back-Translation as a data augmentation technique for challenging low-resource tasks such as stereotype and irony detection, in a multilingual context. Through an intrinsic evaluation of the quality of the augmented instances, we identified failure modes of Machine Translation that can negatively impact the data augmentation process. These errors stem from intrinsic typological differences between specific languages, or from idiosyncrasies of the translation models themselves, potentially learned through methods like BT. Through a preliminary extrinsic evaluation on two multilingual datasets, we found that cross-translation can outperform Back-Translation, allowing us to augment one language subset by leveraging the variety of inputs present in the others. In future work, we aim to expand this study to more numerous and varied source and pivot languages, and to different data augmentation configurations, namely different proportions and selections of injected augmented data. We may also compare Back- and Cross-Translation against, or alongside, other related techniques, such as multi-task learning or Active Learning. We also expect that some improvements can be obtained by mitigating translation failures, for example, by leveraging an external LLM to check each step and remove or correct errors in the final augmented dataset. Finally, it could also be interesting to perform tests with different model types in addition to RoBERTa.
6
Sentiment, Emotion, Irony, Hate
687_2024
2,024
Jan Nehring, Akhil Juneja, Adnan Ahmad, Roland Roller, Dietrich Klakow
Dynamic Prompting: Large Language Models for Task Oriented Dialog
ENG
5
0
0
German Research Center for Artificial Intelligence, Technische Universität Berlin, Saarland University
3
1
1
5
Jan Nehring, Akhil Juneja, Adnan Ahmad, Roland Roller, Dietrich Klakow
0
0
Germany
Kaiserslautern, Berlin, Saarbrücken
Large Language Models show impressive results in many different applications, most notably in the context of question answering and open dialog situations. However, it is still an open question how to use those models for task-oriented dialogs, such as booking or customer information systems. In this work, we propose Dynamic Prompting, an architecture for task-oriented dialog that integrates the benefits of Large Language Models, and we showcase the approach on the MultiWOZ 2.2 dataset. Our architecture leads to a high task success rate, provides sensible and specific answers, and is resistant to hallucinations. Further, we show that Dynamic Prompting is able to answer questions that were not anticipated by the dialog system's designer and that it can correct several types of errors and other shortcomings of the system.
Task-Oriented Dialog Systems (TODS) assist users in completing a task within a conversation [1], for instance, in the context of customer information and bookings (train/restaurant). In an applied setting with real users, it is important that those systems provide correct answers, that tasks can be solved quickly, and that interactions ideally lead to high user satisfaction. To ensure this, TODS often provide system developers with a high level of control over dialog management and answer behavior. Existing solutions normally either implement a dialog manager manually to control the complete interaction, or train it on large amounts of dialog interactions [2, 3, 4, 5]. In contrast, Large Language Models (LLMs) are very good at open-domain dialog and provide fluent and convincing messages in different styles. However, their answers might be misleading or even false (hallucination) [6, 7, 8]. In task-oriented dialog, the model could possibly ‘break out’ of the given dialog task. Using LLMs for task-oriented dialog is still in its infancy. Madotto et al. [9] used LLMs for the whole pipeline of Natural Language Understanding (NLU), Dialog State Tracking (DST), Dialog Policy, and Natural Language Generation. Hudeček and Dusek [10] expand on this idea by evaluating the abilities of LLMs to generate complete task-oriented multi-turn dialogs. They also used LLMs for NLU and DST but, unlike our work, they used a static prompt. Other approaches to LLMs for task-oriented dialog are presented by Cao [11], Hu et al. [12], Wei et al. [13], and Li et al. [14]. To address those limitations and concerns, we propose Dynamic Prompting, a technique combining a traditional task-oriented dialog system pipeline with the benefits of LLMs. Showcased and tested in the context of restaurant booking, we present the advantages and limitations of our approach.
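The following sketch illustrates our reading of the dynamic-prompting idea: the prompt is rebuilt at every turn from the tracked dialog state and the matching database rows, so the LLM only sees information it is allowed to use. The prompt wording and the query_llm placeholder are ours, not the paper's.

```python
# Hedged sketch of dynamic prompting: at each turn the prompt is assembled
# from the current dialog state and the database rows that match it, keeping
# the LLM grounded in retrievable facts. `query_llm` is a placeholder for any
# chat-completion API call.
import json

def build_dynamic_prompt(dialog_state, db_rows, user_utterance):
    return (
        "You are a restaurant-booking assistant. Answer using ONLY the "
        "restaurants listed below; if none match, say so.\n"
        f"Known constraints: {json.dumps(dialog_state)}\n"
        f"Matching restaurants: {json.dumps(db_rows)}\n"
        f"User: {user_utterance}\nAssistant:"
    )

state = {"food": "british", "area": "centre"}
rows = [{"name": "Fitzbillies", "food": "british", "area": "centre"}]
prompt = build_dynamic_prompt(state, rows, "Can you recommend a place?")
# response = query_llm(prompt)  # placeholder: any LLM API call
```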
We presented Dynamic Prompting, a technique integrating LLMs into task-oriented dialog. The results show high sensibility and specificity values, which indicate that the system answers on point and does not deviate from the dialog’s goal. The relatively low Prompt Extraction Performance and Response Slot Accuracy values still result in excellent task success. The high values of the performance metrics Prompt Instruction Performance and Information Extraction Performance indicate that the LLM follows the task-oriented guidance of the dynamic prompts. The Information Extraction Performance of 0.98 shows that the system can very effectively reuse the database information embedded in the prompt in JSON format. In addition, our system shows various ways of correcting errors, such as NLU errors, user requests not anticipated by the designer of the dialog system, and errors in the format of the database entries. Moreover, the generated system answers are more diverse (Section 3.1.4) and more polite (Section 3.1.2) than the human-generated responses in the dataset. We would like to examine these qualitative results more quantitatively in future research. Overall, we find that the widespread problem of hallucinations in LLMs is not an issue in our system as long as we present the correct information to the LLM. As soon as the user asks the system for information that is not present in the prompt, such as booking reference numbers, the LLM starts to hallucinate. Although we assess the system’s performance solely on the restaurant domain, the dynamic prompting method can be extended to other domains in the MultiWOZ 2.2 dataset, such as hotel, taxi, and train. Expanding to new domains will require updating the prompt generation module to accommodate new intents and state values, ensuring smooth integration with these additional domains.
3
Chatbots and Dialogue Systems
688_2024
2,024
Gaia Caligiore, Raffaele Mineo, Concetto Spampinato, Egidio Ragonese, Simone Palazzo , Sabina Fontana
Multisource Approaches to Italian Sign Language (LIS) Recognition: Insights from the MultiMedaLIS Dataset
ENG
6
2
1
Università di Modena e Reggio Emilia, Università di Catania
2
0
0
0
0
0
0
Italy
Modena, Catania
Given the status of sign languages as unwritten visual-gestural languages, research on their automatic recognition has increasingly employed multisource capturing tools for data collection and processing. This paper explores advancements in Italian Sign Language (LIS) recognition using a multimodal dataset in the medical domain: the MultiMedaLIS Dataset. We investigate the integration of RGB frames, depth data, optical flow, and skeletal information to develop and evaluate two computational models: Skeleton-Based Graph Convolutional Network (SL-GCN) and Spatiotemporal Separable Convolutional Network (SSTCN). RADAR data was collected but not included in the testing phase. Our experiments validate the effectiveness of these models in enhancing the accuracy and robustness of isolated LIS sign recognition. Our findings highlight the potential of multisource approaches in computational linguistics to improve linguistic accessibility and inclusivity for members of the signing community.
Italian Sign Language (LIS – Lingua dei Segni Italiana) is the primary means of communication within the Italian signing community. Due to their visual-gestural modality, sign languages (SLs) were initially not considered fully-fledged linguistic systems. However, since the 1960s, beginning with Stokoe’s pioneering works [1], the contemporary study of SLs has evolved into a robust field of research. Over the past half-century, significant societal and scientific advancements have transformed the perception and status of SLs, which are now recognized as natural and complete languages and have received legal recognition in many countries. In the Italian context, the study of signed communication began in the early 1980s, involving both hearing and deaf researchers. At that time, what we now call LIS was still mostly unnamed and was often referred to as ‘mime’ or ‘gesture’ by signers and non-signers alike [2]. The first significant publications on LIS [3] [4], along with the collaborative efforts of deaf and hearing researchers, initiated a transformative period in SL research in the Italian context [5]. This shift in perspective was influenced by factors beyond the language itself, such as increased meta-linguistic awareness and greater visibility of the community and its language to the wider public. Indeed, from a societal perspective, the visibility of SL in Italy, especially in the media, has changed significantly with technological advancements, mirroring global trends. In the late 1980s, Italy introduced subtitles for movies on television, marking a step toward content accessibility. The importance of media accessibility, through subtitles or LIS interpreting, was accentuated during the COVID-19 pandemic. The need for equitable access to critical information for deaf individuals became evident, with efforts born within the community stressing the central role of LIS in ensuring that deaf signers received accessible information during challenging times [6], and highlighting the significant communication barriers that deaf individuals face, especially when in-person interactions were restricted. This increased visibility, along with persistent advocacy by the signing community, played a crucial role in the official recognition of LIS and Tactile LIS (LISt) in May 2021. Within this evolving societal and linguistic framework, given the increased media visibility of LIS and the introduction of video-capturing tools into daily life, language data collection emerges as a central issue. For SLs, the need for comprehensive collections is particularly significant. Unlike oral languages, which in some cases have developed standardized written systems, SLs must rely on video collections to capture signed communication accurately. These videos, whether raw or annotated, are essential for analyzing SLs with both qualitative and quantitative evidence.
In this study, our goal was to present our first steps in testing the efficacy of the MultiMedaLIS Dataset in contributing to the advancement of the field of SLR through multisource approaches. The integration of RGB frames, depth data, optical flow, and skeletal data has provided a comprehensive basis for developing and evaluating SLR models. Our experiments with the SL-GCN and SSTCN architectures have highlighted advancements in recognizing isolated LIS signs in medical semantic contexts, given the domain of our Dataset. The SL-GCN model, trained on skeletal data to construct temporal graphs, proved accurate in capturing spatiotemporal relationships critical to sign recognition. This approach not only enhances the precision of rendering LIS signs but is also reinforced by a Dataset able to support robust graph-based convolutional networks in multimodal SLR tasks. At the same time, our Dataset proved robust, precise, and varied enough for testing the SSTCN model, which focuses on spatiotemporal separable convolutions and showed robust performance in extracting spatial dynamics from RGB frames. Having validated the visual modalities on the mentioned models, we have promising preliminary results on adapting these models to accept RADAR data. We plan to extract the pre-trained RADAR data processing module and use it independently during inference. This approach will eliminate the need for RGB visual data. Furthermore, we plan to expand the Dataset by applying the same protocol with 10 deaf signers. This will effectively increase the current Dataset, enhancing generalizability across different signers. Our goal is to develop an autonomous, resource-constrained system (thanks to the exclusion of RGB data) that operates on-edge or even offline. This cost-effective solution can be used in emergency contexts where direct access to interpreting is not available.
13
Multimodal
689_2024
2,024
Laura Occhipinti
Introducing MultiLS-IT: A Dataset for Lexical Simplification in Italian
ENG
1
1
1
Università di Bologna
1
0
0
0
0
0
0
Italy
Bologna
Lexical simplification is a fundamental task in Natural Language Processing, aiming to replace complex words with simpler synonyms while preserving the original meaning of the text. This task is crucial for improving the accessibility of texts, particularly for users with reading difficulties, second language learners, and individuals with lower literacy levels. In this paper, we present MultiLS-IT, the first dataset specifically designed for automatic lexical simplification in Italian, as part of the larger multilingual Multi-LS dataset. We provide a detailed account of the data collection and annotation process, including complexity scores and synonym suggestions, along with a comprehensive statistical analysis of the dataset. With MultiLS-IT, we fill a significant gap in the field of Italian lexical simplification, offering a valuable resource for developing and evaluating automatic simplification models. Our analysis highlights the diversity of complexity levels in the dataset and discusses the moderate agreement among annotators, underscoring the subjective nature of lexical complexity assessment.
Lexical simplification is a highly complex task within Natural Language Processing, situated within broader automatic text simplification efforts [1]. It is defined as the task of replacing complex words with simpler synonyms that are more accessible to speakers, while preserving the original text’s meaning [2]. A complex word is one that is difficult for some readers to decode due to various characteristics that hinder comprehension [3, 4]. This area of research is of significant interest both socially and in computational applications. Socially, automatic simplification can enhance text comprehension for individuals with reading difficulties [5, 6], second language learners [7], those with cognitive disabilities [8], or individuals with lower literacy levels [9]. In general, making texts accessible to everyone is a democratic act, as it ensures that information and knowledge are available to all members of society, regardless of their reading ability or educational background [10]. From a computational perspective, it proves valuable for complex tasks such as machine translation [11], information retrieval [12], and summarisation [13], in addition to being an integral part of generic text simplification [1]. The ability to simplify text effectively can improve the performance of these applications by making the input data more uniform and easier to process [2]. Lexical simplification encompasses various subtasks [14]. The two most important ones are: 1. the prediction of word complexity, which involves identifying the words that need to be simplified [15]; 2. the replacement of complex words with simpler synonyms [16]. Lexical complexity prediction (1) normally involves assigning a complexity value to a lexical item in context, ranging from 0 to 1, where 0 represents maximum simplicity and 1 denotes maximum complexity [4]. This approach is a more advanced evolution of the traditional binary Complex Word Identification (CWI) [3], which classified words simply as complex or not complex. By moving towards a graded approach, lexical complexity prediction provides a finer-grained, continuous assessment of word difficulty, allowing for more tailored simplification efforts. The replacement of complex words with simpler synonyms (2) comprises three subtasks: the generation of substitutes, their ranking based on complexity, and the selection of the most appropriate substitute [14]. This multi-step process ensures that the chosen synonym not only reduces complexity but also fits seamlessly into the original context. One of the major challenges for such a user-dependent and therefore complex task is the lack of extensive annotated linguistic resources needed to train and evaluate automatic simplification models [2, 4]. Annotated datasets are crucial for developing and testing algorithms that can perform these tasks accurately. In this context, we present MultiLS-IT, which is, to the best of our knowledge, the first dataset specifically designed for automatic lexical simplification in the Italian language. This resource is part of a larger multilingual dataset, Multi-LS (Multilingual Lexical Simplification) [17], created for a shared task at the BEA workshop [18].
The main contributions of this work are: • A detailed description of the data collection and annotation process of the Italian sub-dataset; • A descriptive analysis including statistics and visualizations providing an overview of the dataset’s characteristics; • The establishment of a reference point for future research in lexical simplification for Italian. With this work, we aim to fill a significant gap in lexical simplification research for Italian and provide a solid foundation for future studies and more effective lexical simplification technologies.
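As a toy illustration of one subtask mentioned above, substitute ranking, the sketch below orders candidate synonyms by corpus frequency (via the wordfreq package) as a crude proxy for simplicity; the candidate list here is hand-picked, not model-generated, and the paper's ranking criteria are more sophisticated.

```python
# Illustrative substitute-ranking sketch: candidates are ordered by Zipf
# frequency, using frequency as a rough proxy for familiarity/simplicity.
from wordfreq import zipf_frequency

def rank_substitutes(candidates, lang="it"):
    # Higher Zipf frequency ~ more familiar ~ presumably simpler
    return sorted(candidates, key=lambda w: zipf_frequency(w, lang), reverse=True)

# 'casa' should rank first among these near-synonyms of "house/dwelling"
print(rank_substitutes(["abitazione", "dimora", "casa"]))
```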
In this study, we present MultiLS-IT, the first dataset specifically designed for automatic lexical simplification in Italian. As part of the larger Multi-LS dataset, it addresses a significant gap in resources for lexical simplification in Italian. Despite its limited size, we believe that MultiLS-IT offers a valuable starting point for the development and evaluation of automatic simplification models. Our detailed description of the data collection and annotation process, including complexity ratings and synonym suggestions, provides a protocol that we hope will be followed and extended to increase the resources available for the Italian language. Our analysis revealed that the average complexity score of all target words is 0.276, with a standard deviation of 0.168, highlighting the range of complexity levels within the dataset. Including more diverse and complex contexts would provide a richer resource for training and evaluating simplification models. The inter-annotator agreement value of 0.4230 reflects a moderate level of consistency among annotators, emphasizing the inherent subjectivity in assessing lexical complexity. This relatively low value highlights the need to increase the sample size of both the dataset and the number of annotators to obtain more robust results. Future work should focus on expanding the dataset to include a greater variety of texts and more annotators to improve the reliability and generalizability of the results. Our goal is to create broader resources that enable the development of robust and effective lexical simplification technologies that can improve text accessibility and comprehension for a wide range of readers. In conclusion, while MultiLS-IT represents a significant step forward in the field of lexical simplification for Italian, there is still considerable potential for growth. Expanding the dataset to include a broader range of texts, increasing the number of annotators, and refining the annotation guidelines are all crucial steps toward improving the dataset’s quality. Additionally, the application of more advanced computational models and the exploration of real-world use cases will further contribute to the development of sophisticated tools for lexical simplification. We hope that this dataset will serve as a foundation for future research and development in automatic simplification, ultimately making information more accessible and comprehensible to all.
11
Text Simplification
690_2024
2,024
Laura Occhipinti
Enhancing Lexical Complexity Prediction in Italian through Automatic Morphological Segmentation
ENG
1
1
1
Università di Bologna
1
0
0
0
0
0
0
Italy
Bologna
Morphological analysis is essential for various Natural Language Processing (NLP) tasks, as it reveals the internal structure of words and deepens our understanding of their morphological and syntactic relationships. This study focuses on surface morphological segmentation for the Italian language, addressing the limited representation of detailed morphological information in existing corpora. Using an automatic segmentation tool, we extract quantitative morphological parameters to investigate their impact on the perception of word complexity by native Italian speakers. Through correlation analysis, we demonstrate that morphological features, such as the number of morphemes and lexical morpheme frequency, significantly influence how complex words are perceived. These insights contribute to improving automatic lexical complexity prediction models and offer a deeper understanding of the role of morphology in word comprehension.
Morphological analysis is crucial for various NLP tasks, as it provides insights into the internal structures of words and helps us better understand the morphological and syntactic relationships between words [1]. The Italian language, with its rich morphology and extensive use of inflection and derivation, presents unique challenges and opportunities for morphological segmentation. Automatic segmentation, a key component of morphology learning, involves dividing word forms into meaningful units such as roots, prefixes, and suffixes [2]. This task falls under the broader category of subword segmentation [3] but is distinct due to its linguistic motivation. Computational approaches typically identify subwords based on purely statistical considerations, which often results in subunits that do not correspond to recognizable linguistic units [4, 5, 6, 7]. Making this task more morphologically oriented could enable models to generalize better to new words or forms, as basic roots or morphemes are often shared among words, and it could also facilitate the interpretation of model results. When discussing morphological segmentation, we can refer to two types: (1) Surface segmentation, which involves dividing words into morphs, the surface forms of morphemes; (2) Canonical segmentation, which involves dividing words into morphemes and reducing them to their standard forms [8]. For instance, consider the Italian word mangiavano (they were eating). The resulting surface segmentation would be mangi- + -avano, where mangi- is a morph derived from the root of the verb mangiare, and -avano is the suffix indicating the third person plural of the imperfect tense. In contrast, the canonical segmentation would yield mangiare + -avano, with mangiare as the canonical morpheme and -avano as the suffix. In this study, we focus on surface morphological segmentation for the Italian language. Morphological features are often not adequately represented in available corpora for this language, or they refer exclusively to morphosyntactic information, such as the grammatical category of words and a macro-level descriptive analysis mainly related to inflection. Information about the internal structure of words, such as derivation or composition, is often lacking. The primary objective of this work is to use an automatic segmenter to extract a series of quantitative morphological parameters. We believe that our approach does not require the detailed analysis provided by canonical segmentation, which could entail longer processing times. In addition to examining classic parameters reported in the literature that influence complexity [9], such as word frequency, length, and number of syllables, we aim to explore how morphological features integrate with these factors to affect word complexity perception. Specifically, we seek to understand how the internal structure of words contributes to the cognitive load that speakers experience when processing more complex lexical items. Our premise is that words with more morphemes are more complex because they contain more information to decode [10]. For example, consider the word infelicità (unhappiness). To decode it, one must know the word felice (happy), from which it is derived, as well as the prefix in-, which negates the quality expressed by the base term, and the suffix -ità, which transforms the adjective into an abstract noun. 
Therefore, to fully understand the meaning of infelicità, the reader or listener must be able to correctly recognize and interpret each of these morphemes and their contribution to the overall meaning of the word. The main contributions of this work are: (1) Providing a tool capable of automatically segmenting words into linguistically motivated base forms; (2) presenting the dataset constructed for training our model; (3) evaluating the impact of different linguistic features on speakers’ perception of word complexity, with a particular focus on morphological features.
This study highlights the significance of integrating morphological features into automatic models to enhance the comprehension and prediction of lexical complexity. The high performance of the Neural Morpheme Segmentation model demonstrates the efficacy of convolutional neural networks in capturing the detailed patterns of morphological segmentation in the Italian language. The correlation analysis reveals that while traditional metrics like word length and frequency are valuable predictors of complexity, incorporating morphological features provides additional insights that enrich our understanding of lexical complexity. Notably, the positive correlation between the number of morphemes and perceived complexity suggests that words with more morphemes are inherently more complex. Conversely, frequent lexical morphemes tend to reduce perceived complexity, highlighting the importance of familiarity in complexity perception. Our study also emphasizes the need for diverse linguistic features, including both surface characteristics and morphological traits, to create more robust and accurate models for predicting word complexity. The statistically significant correlations for most features validate their relevance in complexity prediction. However, it is important to note that our findings are based on a relatively small dataset of annotated complexity perceptions. To obtain more robust and generalizable results, it would be highly beneficial to have access to a larger and more diverse dataset of complexity annotations. Expanding the dataset to include a wider variety of texts and contexts would enhance the reliability of the correlations observed and improve the training and evaluation of automatic complexity prediction models. Future research should focus on gathering more extensive annotated datasets and exploring additional linguistic features that may influence complexity perception. By doing so, we can further refine our models and develop more effective tools for lexical simplification and other applications aimed at improving text accessibility.
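A minimal sketch of the kind of correlation analysis reported above: Spearman correlation between a morphological feature (the number of morphemes) and averaged perceived-complexity scores. The values are dummy data for illustration only.

```python
# Correlation-analysis sketch: does the number of morphemes correlate with
# perceived lexical complexity? Dummy values stand in for real annotations.
from scipy.stats import spearmanr

num_morphemes  = [1, 2, 3, 3, 4, 2, 1, 4]                      # e.g., in-felic-ità -> 3
complexity_avg = [0.10, 0.22, 0.35, 0.30, 0.48, 0.25, 0.08, 0.52]

rho, p_value = spearmanr(num_morphemes, complexity_avg)
print(f"Spearman rho={rho:.3f}, p={p_value:.4f}")
```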
7
Lexical and Semantic Resources and Analysis
691_2024
2,024
Vittoria Tonini, Simona Frenda, Marco Antonio Stranisci, Viviana Patti
How do we counter dangerous speech in Italy?
ENG
4
3
1
Università di Torino, Aequa-tech, Heriot-Watt University
3
1
0
1
Simona Frenda
3
Vittoria Tonini, Simona Frenda, Marco Antonio Stranisci
Italy
Turin, Bari
The phenomenon of online dangerous speech is a growing challenge, and various organisations try to prevent its spread by promptly answering hateful messages online. In this context, we propose a new dataset of activists’ and users’ comments on Facebook reacting to specific news headlines: AmnestyCounterHS. Taking into account the literature on counterspeech, we defined a new annotation schema and applied it to our dataset, in order to examine the counter-narrative strategies most used in Italy. This research aims to support the future development of automatic counterspeech generation. This paper also presents a comparative analysis of our dataset with two other Italian datasets (Counter-TWIT and multilingual CONAN) containing dangerous speech and counter narratives. Through this analysis, we examine how the environment (artificial vs. ecological) and the topics of online discussions influence the nature of counter narratives. Our findings highlight the predominance of negative sentiment and emotions, the varying presence of stereotypes, and the strategic differences in counter narratives across datasets.
Recently, attention to dangerous speech (DS) online has increased in different sectors, ranging from initiatives for monitoring the spread of DS, particularly in Italy (e.g., by VOX, or by researchers like Capozzi et al. [1]), to efforts to prevent the escalation of DS online using methods for the detection and removal of dangerous content (e.g., following the policies of social platforms). Moreover, specific actions to counter DS online were promoted in response to potentially or effectively dangerous speech or news on various topics, such as the Amnesty Task Force on Hate Speech, which assembles specialized activists who actively intervene by writing counterspeech. In this context, the new techniques of Natural Language Understanding (NLU) and Natural Language Generation (NLG) can play a very important role. On DS detection, the literature is vast [4, 5] and covers various nuances of DS [6, 7], different types of manifestation (i.e., explicit and implicit [8]), and co-occurrences with other psychological and linguistic phenomena, such as stereotypes [9] and sarcasm [10]. Regarding work on countering DS, some studies focused on imitating the operators of Non-Governmental Organizations (NGOs) in their intervention in online discussions, on selecting the most suitable responses from a database [11], or on creating generative models able to reply automatically to hateful content using counter narratives (CN) while avoiding hallucinations [12]. The development of NLU and NLG models is mainly based on data-driven approaches, which imply the creation of a specific dataset to detect DS or generate adequate CN. According to the survey by Bonaldi et al. [2], the available datasets in languages other than English are very few. Among them, currently, only two datasets contain Italian texts: CONAN [13] and Counter-TWIT [14]. The creation environment of CONAN is artificial (i.e., activists were asked to write CN in response to specific hateful comments), while that of Counter-TWIT is entirely ecological (i.e., a collection of tweets written by users). In this scenario, we propose a new dataset, AmnestyCounterHS, which, differently from the existing ones, reflects the real action of activists online. Indeed, our dataset, compiled from Facebook, includes interactions guided by the Amnesty Task Force on Hate Speech (HS), representing an ecological and spontaneous context. Here, the counterspeech intervention is guided by Amnesty International activists who decided to intervene under certain potentially dangerous posts spread by online newspapers or users (e.g., verbal attacks against women, immigrants, and so on). Moreover, inspired by existing strategy taxonomies [15, 13, 14], we mapped a more complete taxonomy inclusive of both existing and new strategies found in our dataset. This new resource allows us to analyze the CN strategies used in the Italian language across different types of messages and contexts (CONAN, Counter-TWIT, AmnestyCounterHS). By comparing these datasets, we examine: 1) which CN strategies are the most used in the different contexts and discussions online; 2) what the differences are, in terms of sentiment, emotions, and the presence of stereotypes, between potentially dangerous messages posted online and the counterspeech produced by activists/users in all the datasets.
The importance of understanding how these CN strategies are used lies in the need to raise social awareness about real events, the necessity of being correctly informed about facts (avoiding fake news), and the need to be conscious of the consequences of dangerous speech for the target groups [16].
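For the sentiment side of the comparison outlined in question 2), one possible setup is to run an off-the-shelf Italian sentiment classifier over messages and counter narratives; the snippet below uses a public FEEL-IT checkpoint as an example, which may differ from the tooling actually used in the paper.

```python
# Sketch of the sentiment comparison step: classify potentially dangerous
# messages and counter narratives with a public Italian sentiment model.
# The model id is an example choice, not necessarily the paper's.
from transformers import pipeline

sentiment = pipeline("text-classification",
                     model="MilaNLProc/feel-it-italian-sentiment")

texts = [
    "Questi immigrati rovinano il paese.",       # potentially dangerous message
    "Nessuna persona merita di essere odiata.",  # counter narrative
]
for text in texts:
    print(text, "->", sentiment(text)[0]["label"])
```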
In this paper, we examine the CN strategies used in various contexts, looking at their characteristics and typology across different datasets in Italian: CONAN, Counter-TWIT, and AmnestyCounterHS. Thanks to this comparative analysis, we observed that different environments and topics affect the type of strategy used by activists or users who want to counter DS [18]. One of the main points that we want to underline is the importance of the conversational context [19, 20, 21, 22]. In our dataset, AmnestyCounterHS, the annotators had difficulty understanding the position of the author of a message without the entire conversational thread. For instance, let us consider this comment written under news about COVID-19: "Infatti. Ampiamente dimostrato" ("Indeed. Amply demonstrated"). Without the full conversation, it is challenging to determine whether this comment is supporting or contradicting an argument about COVID-19. Similarly, consider the comment "Grande argomentazione, scuola di Demostene? #posailfiasco" ("Great argument, school of Demosthenes? #posailfiasco"), written under the news headline "Un milione di profughi sono ostaggio di Erdogan" ("A million refugees are hostage to Erdogan"). We can clearly see that the comment is ironic, but we cannot understand its stance on integration. For this reason, future developments in automatic counterspeech generation should focus on incorporating comprehensive conversational threads to enhance accuracy and relevance. This approach will be fundamental to creating effective AI-driven counter-narrative systems.
6
Sentiment, Emotion, Irony, Hate
692_2024
2,024
Tiziano Labruna, Sofia Brenna, Giovanni Bonetta, Bernardo Magnini
Are you a Good Assistant? Assessing LLM Trustability in Task-oriented Dialogues
ENG
4
1
0
Libera Università di Bolzano, Fondazione Bruno Kessler
2
0
0
0
0
0
0
Italy
Bolzano, Trento
Despite the impressive capabilities of recent Large Language Models (LLMs) to generate human-like text, their ability to produce contextually appropriate content for specific communicative situations is still a matter of debate. This issue is particularly crucial when LLMs are employed as assistants to help solve tasks or achieve goals within a given conversational domain. In such scenarios, the assistant is expected to access specific knowledge (e.g., a database of restaurants, a calendar of appointments) that is not directly accessible to the user and must be consistently utilised to accomplish the task. In this paper, we conduct experiments to evaluate the trustworthiness of automatic assistants in task-oriented dialogues. Our findings indicate that state-of-the-art open-source LLMs still face significant challenges in maintaining logical consistency with a knowledge base of facts, highlighting the need for further advancements in this area.
Conversational assistants [1] are widely used to help human users achieve specific goals through dialogue. In a typical scenario (e.g., booking a restaurant, scheduling an appointment, selecting a song in a playlist), the assistant interprets the user’s goals, searches a database for relevant options, and provides the user with responses (e.g., a restaurant reservation, a new appointment in a calendar, a song playing on a smartphone). A key ability for an assistant is to maintain consistency between user requests and domain knowledge [2]. This is crucial because, in a typical setting, the user does not know the actual content of the database (e.g., all the restaurants in a city) and, as a consequence, cannot verify whether the assistant’s response is correct. While in traditional approaches [3] this consistency was ensured by a dedicated component responsible for retrieving information from a domain database, recent end-to-end approaches [4, 5] rely on a single LLM-based model for utterance understanding, domain knowledge retrieval, and response generation. In this setting, the LLM must generate responses that are as aligned with the database as possible. However, the ability of current end-to-end assistants to maintain consistency between the generated responses and the actual content of the domain knowledge is questionable (e.g., due to LLM confabulations), and there is a clear lack of empirical evidence on this crucial issue. To be more concrete, Figure 1 shows an example of a dialogue that is inconsistent with respect to the conversational knowledge base. Here, although there are two British restaurants in the knowledge base, the system (turn S1) informs the user that there are three British restaurants, providing incorrect information. This is an example of an inconsistency generated by an LLM, which is the focus of this research. Our aim is to shed new light on the trustworthiness of an LLM playing the role of an assistant in a task-oriented conversational domain while interacting with a user. We aim to answer the following research questions: (i) How can we operationally define the consistency between a task-oriented dialogue and the domain database behind the dialogue? (ii) How can we quantify the degree of trustworthiness of an assistant-LLM? (iii) Can we collect empirical evidence on a sufficiently large amount of task-oriented dialogues? To address these research questions, we set up an experimental framework allowing large-scale analysis, where task-oriented dialogues are first automatically generated by two instances of a state-of-the-art LLM, Llama-3 8B [6], and then a more powerful LLM, GPT-4o [7], is used to detect potential inconsistencies between a dialogue and the corresponding domain knowledge base. We hope that new large-scale experimental data can be used to develop more reliable and effective task-oriented dialogue systems, ultimately enhancing the capabilities of conversational agents in various applications.
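To make the notion of consistency operational in miniature, the toy check below verifies a count claim made by the assistant against the knowledge base; in the paper this judgment is delegated to GPT-4o, so the deterministic version here only conveys the intuition, using the British-restaurant example from Figure 1.

```python
# Toy consistency check: a factual claim by the assistant ("there are N <food>
# restaurants") is verified against the grounding knowledge base. KB contents
# are illustrative.
kb = [
    {"name": "Fitzbillies", "food": "british"},
    {"name": "The Oak Bistro", "food": "british"},
    {"name": "Da Mario", "food": "italian"},
]

def claim_is_consistent(claimed_count, food):
    actual = sum(1 for r in kb if r["food"] == food)
    return claimed_count == actual

print(claim_is_consistent(3, "british"))  # False: the KB only has two
print(claim_is_consistent(2, "british"))  # True
```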
In this study, we explored the capabilities of state-of-the-art LLMs in generating task-oriented dialogues, focusing on maintaining consistency with a provided KB and avoiding hallucinations. Our experiments demonstrated that Llama-3, despite its advancements, struggles to perform reliably in these settings. The model showed significant limitations, especially in dialogues that led to failed outcomes (where the desired restaurant was not in the KB) and in longer interactions. As a side contribution, we release "The Dining Llamas of Oz", a corpus of 1,311 dialogues generated through user-Llama and system-Llama interactions, to aid future research. Our findings highlight the need for further development to improve LLM reliability and accuracy in task-oriented dialogue applications.
3
Chatbots and Dialogue Systems
693_2024
2,024
Kamyar Zeinalipour, Achille Fusco, Asya Zanollo, Marco Maggini, Marco Gori
Harnessing LLMs for Educational Content-Driven Italian Crossword Generation
ENG
5
1
0
Università di Siena, IUSS
2
0
0
0
0
0
0
Italy
Siena, Pavia
In this work, we unveil a novel tool for generating Italian crossword puzzles from text, utilizing advanced language models such as GPT-4o, Mistral-7B-Instruct-v0.3, and Llama3-8b-Instruct. Crafted specifically for educational applications, this cutting-edge generator makes use of the comprehensive Italian-Clue-Instruct dataset, which comprises over 30,000 entries including diverse text, solutions, and types of clues. This carefully assembled dataset is designed to facilitate the creation of contextually relevant clues in various styles associated with specific texts and keywords. The study delves into four distinctive styles of crossword clues: those without format constraints, those formed as definite determiner phrases, copular sentences, and bare noun phrases. Each style introduces unique linguistic structures to diversify clue presentation. Given the lack of sophisticated educational tools tailored to the Italian language, this project seeks to enhance learning experiences and cognitive development through an engaging, interactive platform. By meshing state-of-the-art AI with contemporary educational strategies, our tool can dynamically generate crossword puzzles from Italian educational materials, thereby providing an enjoyable and interactive learning environment. This technological advancement not only redefines educational paradigms but also sets a new benchmark for interactive and cognitive language learning solutions.
While traditionally valued for their challenge and entertainment, crossword puzzles are increasingly recognized for their educational benefits. They provide an interactive learning environment that enhances the retention of both technical terms and general language skills, hence facilitating learning across various disciplines, improving language acquisition, and supporting cognitive development through critical thinking and memory retention [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. The integration of Natural Language Processing (NLP) and Large Language Models (LLMs) has further enhanced their effectiveness by providing sophisticated, contextually relevant clues for educational crosswords. This paper presents a novel tool that uses LLMs to generate tailored Italian educational crossword puzzles from texts, offering various clue types. By integrating user-provided texts or keywords and applying fine-tuning techniques, the tool produces high-quality clues and answers, offering educators a resource to develop more interactive and effective instructional methods. Furthermore, a new dataset, called Italian-Clue-Instruct, has been compiled and will be released to the scientific community. The layout of this paper is organized as follows: Section 2 surveys the relevant literature in detail. Section 3 explains the methods used for dataset collection and curation and describes the computational techniques employed in our study. Section 4 reports the results derived from our experimental analysis. Finally, Section 5 closes with conclusive insights and the broader implications of our research findings.
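A hedged sketch of content-driven clue generation as described above: the model receives a source text, a keyword (the crossword answer), and a target clue style, and returns a clue. The Italian prompt wording is ours, not the paper's, and query_llm stands in for any of the models mentioned (GPT-4o, Mistral-7B-Instruct-v0.3, Llama3-8b-Instruct).

```python
# Sketch of text-conditioned crossword-clue generation. The prompt template
# and example text are illustrative assumptions; `query_llm` is a placeholder
# for the chosen LLM call.
def make_clue_prompt(text, keyword, style="bare noun phrase"):
    return (
        "Dato il seguente testo, scrivi una definizione da cruciverba "
        f"in italiano per la parola '{keyword}', nello stile: {style}.\n"
        f"Testo: {text}\nDefinizione:"
    )

prompt = make_clue_prompt(
    "Dante Alighieri scrisse la Divina Commedia nel XIV secolo.",
    keyword="DANTE",
)
# clue = query_llm(prompt)  # placeholder for the chosen LLM call
```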
A novel system for generating crossword clues from Italian text is introduced, leveraging the newly developed Italian-Clue-Instruct dataset. This dataset, which includes text, keywords, categories, and related crossword clues in Italian, is pioneering in this field. By fine-tuning two large language models (LLMs), Mistral-7B-Instruct-v0.3 and Llama3-8b-Instruct, using this dataset, we have achieved significant improvements in the models’ ability to generate crossword clues from given text. The results highlight a substantial enhancement in model performance after fine-tuning. Both the Italian-Clue-Instruct dataset and the fine-tuned models are now publicly available, providing valuable tools for students and teachers to create educational crossword puzzles from Italian text. Future research will aim to develop models capable of generating various types of crossword clues, including fill-in-the-blank clues.
1
Language Models