Article

Evaluating Neural Networks’ Ability to Generalize against Adversarial Attacks in Cross-Lingual Settings

1 Department of Computer Science Engineering, Maharaja Surajmal Institute of Technology, Affiliated to GGSIPU, New Delhi 110058, India
2 School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104, USA
3 Faculty of Logistics, Molde University College, Britvegen 2, 6410 Molde, Norway
* Author to whom correspondence should be addressed.
Submission received: 17 May 2024 / Revised: 14 June 2024 / Accepted: 17 June 2024 / Published: 23 June 2024
(This article belongs to the Special Issue Natural Language Processing (NLP) and Applications—2nd Edition)

Featured Application

The insights gained from our investigation of mBART and XLM-Roberta can be applied to create better multilingual datasets. These improved datasets will support the development of more robust and accurate NLP models that handle diverse languages effectively, enhancing performance on tasks such as machine translation, sentiment analysis, text categorization, and information retrieval. This research also addresses biases and limitations in current translation-based methods.

Abstract

Cross-lingual transfer learning using multilingual models has shown promise for improving performance on natural language processing tasks with limited training data. However, translation can introduce superficial patterns that negatively impact model generalization. This paper evaluates two state-of-the-art multilingual models, the Cross-Lingual Model-Robustly Optimized BERT Pretraining Approach (XLM-Roberta) and the Multilingual Bi-directional Auto-Regressive Transformer (mBART), on the Cross-Lingual Natural Language Inference (XNLI) task using both original and machine-translated evaluation sets. Our analysis demonstrates that translation can facilitate cross-lingual transfer learning, but preserving linguistic patterns is critical. The results provide insights into the strengths and limitations of state-of-the-art multilingual natural language processing architectures for cross-lingual understanding.
Keywords: cross-lingual NLP; multilingual models; adversarial attacks; XLM-Roberta; mBART

Share and Cite

MDPI and ACS Style

Mathur, V.; Dadu, T.; Aggarwal, S. Evaluating Neural Networks’ Ability to Generalize against Adversarial Attacks in Cross-Lingual Settings. Appl. Sci. 2024, 14, 5440. https://0-doi-org.brum.beds.ac.uk/10.3390/app14135440


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
