Security, Privacy, and Trust in Artificial Intelligence Applications

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: 31 December 2024 | Viewed by 3915

Special Issue Editor


Prof. Dr. Giuseppe Maria Luigi Sarnè
Guest Editor
Department of Psychology, University of Milano Bicocca, 20126 Milano, Italy
Interests: trust and reputation systems; internet of things; distributed artificial intelligence; intelligent transportation systems; multiagent systems

Special Issue Information

Dear Colleagues,

Artificial intelligence has become increasingly sophisticated in recent years, and its applications are being proposed in growing numbers to solve all kinds of problems and to support a wide range of user needs. It is therefore reasonable to expect that artificial intelligence will profoundly change the world in the coming years, transforming how goods and services are produced and used in science, engineering, industry, medicine, robotics, manufacturing, entertainment, optimization, business, and many other areas. As a result, the contribution of AI to global economic development will become increasingly important. Consequently, AI applications must meet security, privacy, and trust requirements in order to provide a safe and reliable environment for both developers and users.

In this context, this Special Issue is dedicated to security, privacy, and trust in Artificial Intelligence (AI) applications. It aims to bring together the latest perspectives proposed in the literature, recent research efforts, and contributions from practitioners and other interested parties in order to advance the state of the art in building AI applications with strong security, privacy, and trustworthiness features, through new security, trust, and reputation models, architectures, protocols, and standards. We invite researchers and practitioners to contribute high-quality original research or review articles on these topics to this Special Issue.

In particular, it is crucial to propose architectures, protocols, practical applications, and use cases, and to perform risk analyses that clarify the threat landscape. We are also interested in the convergence of AI with, but not limited to, software agents, IoT systems, smart cities, and edge and cloud computing, leveraging algorithms that combine trustworthiness and reputation information with privacy and security mechanisms. We also solicit contributions on experimental campaigns that use simulation frameworks to evaluate strategies for improving security, privacy, and trustworthiness and for preventing and deterring deceptive behavior.

Topics of interest include, but are not limited to, the following aspects of security, privacy, trust, and reputation:

  • AI-based architectures, models, and protocols;
  • AI-based applications (including but not limited to big data, digital healthcare, green computing, Industry 4.0, intelligent transportation systems, IoT, mobile computing, smart cities, smart factories, supply chains, wireless devices, etc.);
  • Energy-aware management with AI;
  • Fog/Edge/Cloud AI-based systems and applications for guaranteeing security/privacy/trust and reputation (including but not limited to authentication and access control, availability and auditing, data security and privacy, formal methods, key management, lightweight cryptography, malware detection, protocol security, vulnerability analysis, etc.);
  • Models, protocols, algorithms, and approaches for security/privacy/trust and reputation;
  • Robotic applications;
  • Security mechanisms for embedded IoT devices (including but not limited to accountability, firmware security, integrity verification, malware protection, remote authentication, etc.);
  • Services for AI platforms.

Prof. Dr. Giuseppe Maria Luigi Sarnè
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • security
  • privacy
  • trust and reputation
  • Internet of Things
  • smart objects
  • software agents
  • intelligent systems

Published Papers (2 papers)


Research

12 pages, 1867 KiB  
Article
Experimental Evaluation: Can Humans Recognise Social Media Bots?
by Maxim Kolomeets, Olga Tushkanova, Vasily Desnitsky, Lidia Vitkova and Andrey Chechulin
Big Data Cogn. Comput. 2024, 8(3), 24; https://doi.org/10.3390/bdcc8030024 - 26 Feb 2024
Viewed by 1174
Abstract
This paper aims to test the hypothesis that the quality of social media bot detection systems based on supervised machine learning may not be as accurate as researchers claim, given that bots have become increasingly sophisticated, making it difficult for human annotators to detect them better than random selection. As a result, obtaining a ground-truth dataset with human annotation is not possible, which leads to supervised machine-learning models inheriting annotation errors. To test this hypothesis, we conducted an experiment where humans were tasked with recognizing malicious bots on the VKontakte social network. We then compared the “human” answers with the “ground-truth” bot labels (‘a bot’/‘not a bot’). Based on the experiment, we evaluated the bot detection efficiency of annotators in three scenarios typical for cybersecurity but differing in their detection difficulty: (1) detection among random accounts, (2) detection among accounts of a social network ‘community’, and (3) detection among verified accounts. The study showed that humans could only detect simple bots in all three scenarios but could not detect more sophisticated ones (p-value = 0.05). The study also evaluated the limits of hypothetical and existing bot detection systems that leverage non-expert-labelled datasets: the balanced accuracy of such systems can drop to 0.5 and lower, depending on bot complexity and detection scenario. The paper also describes the experiment design, collected datasets, statistical evaluation, and machine learning accuracy measures applied to support the results. In the discussion, we raise the question of using human labelling in bot detection systems and its potential cybersecurity issues. We also provide open access on GitHub to the datasets used, the experiment results, and the software code for evaluating the statistical and machine learning accuracy metrics used in this paper.
(This article belongs to the Special Issue Security, Privacy, and Trust in Artificial Intelligence Applications)
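To make the reported evaluation concrete, the short sketch below scores human annotator answers against "ground-truth" bot labels using balanced accuracy, the metric whose drop towards 0.5 the abstract refers to. The label arrays and the use of scikit-learn are illustrative assumptions for this listing, not the paper's actual data or code (those are available in the authors' GitHub repository).

```python
# Hedged sketch: compare human annotator answers with "ground-truth" bot labels
# via balanced accuracy. The labels below are made-up placeholders.
from sklearn.metrics import balanced_accuracy_score

ground_truth = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = 'a bot', 0 = 'not a bot'
human_labels = [1, 0, 0, 1, 0, 1, 1, 0]   # annotator answers for the same accounts

# Balanced accuracy averages recall over both classes, so an annotator who is
# no better than random guessing scores around 0.5 regardless of class balance.
print(balanced_accuracy_score(ground_truth, human_labels))  # 0.75 for this toy data
```

A detector trained on such annotations can inherit exactly these labelling errors, which is why the paper reports balanced accuracy dropping towards 0.5 for more sophisticated bots.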

23 pages, 1527 KiB  
Article
Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW
by William Villegas-Ch, Angel Jaramillo-Alcázar and Sergio Luján-Mora
Big Data Cogn. Comput. 2024, 8(1), 8; https://doi.org/10.3390/bdcc8010008 - 16 Jan 2024
Viewed by 2109
Abstract
This study evaluated the generation of adversarial examples and the resulting robustness of an image classification model. The attacks were performed using the Fast Gradient Sign Method (FGSM), the Projected Gradient Descent (PGD) method, and the Carlini and Wagner (CW) attack to perturb the original images and analyze their impact on the model’s classification accuracy. Additionally, image manipulation techniques were investigated as defensive measures against adversarial attacks. The results highlighted the model’s vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective. Promising approaches such as noise reduction, image compression, and Gaussian blurring were presented as effective countermeasures. These findings underscore the vulnerability of machine learning models, the need to develop robust defenses against adversarial examples, and the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. Such efforts are crucial to safeguarding model integrity and trust in an environment of constantly evolving adversarial threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the Fast Gradient Sign Method and Projected Gradient Descent attacks, and an even larger 35% decrease with the Carlini and Wagner method.
(This article belongs to the Special Issue Security, Privacy, and Trust in Artificial Intelligence Applications)
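For readers unfamiliar with the attacks named in the title, the sketch below shows the core of the Fast Gradient Sign Method: the input is perturbed along the sign of the loss gradient. It is a generic PyTorch illustration with placeholder `model`, `images`, `labels`, and `epsilon`, not the authors' implementation or experimental setup.

```python
# Minimal FGSM sketch (assuming PyTorch): perturb an image in the direction of the
# sign of the loss gradient and check whether the classifier's prediction changes.
# `model`, `images`, `labels`, and `epsilon` are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return adversarially perturbed copies of `images`."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss; keep pixels in [0, 1].
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()

# The accuracy drop (e.g., the ~25% reported for VGG16) would then be measured by
# comparing model(adv_images).argmax(1) against the true labels on a test set.
```

PGD repeats this step iteratively with a projection back into an epsilon-ball around the original image, while the CW attack instead solves an optimization problem over the perturbation, which is why their effectiveness can differ.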
