Computational Trust and Reputation Models

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (25 May 2022) | Viewed by 11488

Special Issue Editors


Guest Editor
Department of Electronic Engineering, Polytechnic University of Madrid, Madrid, Spain
Interests: artificial intelligence; machine learning; multimedia processing and retrieval; speech technology; affective computing

Guest Editor
Department of Electronic Engineering, Polytechnic University of Madrid, Madrid, Spain
Interests: human–machine interaction (HMI); speech synthesis; speech recognition; affective computing; machine learning; data science

Guest Editor
Department of Signal Theory and Communications, University Carlos III de Madrid, Leganés, Spain
Interests: speech and audio processing; audio segmentation and classification; auditory saliency; multimodal attention models; speech-based health applications

Special Issue Information

Dear Colleagues,

Trust is essential in many interdependent human relationships, and the decision to trust others is often made quickly. Previous research has shown that characteristics such as voice or attractiveness significantly influence perceived trustworthiness.

In recent years, computational trust and reputation models have proven to be an invaluable asset for improving both computer–computer and human–computer interaction.

This Special Issue focuses on novel approaches for identifying and tracking signals of trustworthiness from different modalities (facial expressions, gestures, gaze, voice, conversational features, etc.) or fusing them into multimodal computational trust and reputation models.

Trust and reputation are key issues for technology design and development in many different domains; application areas therefore include, but are not limited to:

  • Human–computer interaction (embodied conversational agents or chatbots);
  • Intelligent systems (content-based multimedia indexing and retrieval, content-based recommender systems, business intelligence analytics, etc.);
  • E-Learning (intelligent agents for student or teacher assistance, monitoring student emotional feedback, student performance and engagement prediction, etc.);
  • E-Health (patient/elderly home monitoring, mental health monitoring, etc.).

Original research papers concerned with both theoretical and applied aspects of computational trust and reputation models and analysis are welcome. Review articles describing the current state-of-the-art of multimodal trust and reputation computational models are also highly encouraged, including overviews of data and computational resources available.

Possible topics include but are not limited to the following:

  • Machine and deep learning algorithms for trust and reputation modelling.
  • Theoretical aspects of multimodal trust and reputation models.
  • Combination and fusion of modalities for trust prediction.
  • Trust and reputation prediction in the wild.
  • Data and resources for multimodal trust and reputation computational models.
  • Deception and sincerity: analysis, detection and synthesis.
  • Affective computing: multimodal behavior, action, emotion, or stance recognition; sentiment analysis and opinion mining.
  • Multimodal dialogue systems; question answering and chatbot development; intelligent agents; natural language generation; speech synthesis and recognition.
  • Multimodal dialogue analysis; discourse analysis; text and speech analysis.
  • Deep-learning-based video, image, speech, and audio processing.

All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open-access format in Applied Sciences and collected together on the Special Issue website. Authors are encouraged to submit contributions in any related area.

Dr. Fernando Fernández-Martínez
Dr. Juan Manuel Montero Martínez
Dr. Ascensión Gallardo-Antolín
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • trustworthiness
  • reputation
  • emotion
  • personality
  • stance
  • deception
  • affective computing
  • artificial intelligence
  • machine learning
  • multimodal signal processing
  • multimedia pattern recognition
  • multimodal fusion

Published Papers (4 papers)


Research

16 pages, 8043 KiB  
Article
Topic-Oriented Text Features Can Match Visual Deep Models of Video Memorability
by Ricardo Kleinlein, Cristina Luna-Jiménez, David Arias-Cuadrado, Javier Ferreiros and Fernando Fernández-Martínez
Appl. Sci. 2021, 11(16), 7406; https://doi.org/10.3390/app11167406 - 12 Aug 2021
Cited by 7 | Viewed by 1828
Abstract
Not every visual media production is equally retained in memory. Recent studies have shown that the elements of an image, as well as their mutual semantic dependencies, provide a strong clue as to whether a video clip will be recalled on a second viewing or not. We believe that short textual descriptions encapsulate most of these relationships among the elements of a video, and thus they represent a rich yet concise source of information to tackle the problem of media memorability prediction. In this paper, we deepen the study of short captions as a means to convey in natural language the visual semantics of a video. We propose to use vector embeddings from a pretrained SBERT topic detection model with no adaptation as input features to a linear regression model, showing that, from such a representation, simpler algorithms can outperform deep visual models. Our results suggest that text descriptions expressed in natural language might be effective in embodying the visual semantics required to model video memorability.
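The pipeline the abstract describes — frozen sentence embeddings of video captions fed to a linear regressor — can be sketched as follows. This is an illustrative stand-in only: `embed()` below is a crude placeholder for the pretrained SBERT model, and the training captions and memorability scores are invented.

```python
def embed(caption):
    # Placeholder for SBERT: a crude bag-of-character-trigram hash embedding.
    dim = 8
    vec = [0.0] * dim
    for i in range(len(caption) - 2):
        vec[hash(caption[i:i + 3]) % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def fit_ridge(X, y, lam=0.1, lr=0.1, steps=2000):
    # Ridge regression by gradient descent (no external libraries needed).
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(steps):
        grad_w, grad_b = [lam * wj for wj in w], 0.0
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) + b - yi
            for j in range(d):
                grad_w[j] += err * xi[j] / n
            grad_b += err / n
        w = [wj - lr * gj for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b
    return w, b

def predict(w, b, caption):
    return sum(wj * xj for wj, xj in zip(w, embed(caption))) + b

# Toy training set: captions paired with made-up memorability scores in [0, 1].
captions = ["a dog runs on the beach", "slow pan over grey clouds",
            "a child blows out birthday candles", "static shot of a wall"]
scores = [0.82, 0.35, 0.90, 0.20]
w, b = fit_ridge([embed(c) for c in captions], scores)
```

The point of the paper is that the heavy lifting happens in the pretrained text encoder; the regressor on top can stay this simple.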
(This article belongs to the Special Issue Computational Trust and Reputation Models)

25 pages, 13308 KiB  
Article
Guided Spatial Transformers for Facial Expression Recognition
by Cristina Luna-Jiménez, Jorge Cristóbal-Martín, Ricardo Kleinlein, Manuel Gil-Martín, José M. Moya and Fernando Fernández-Martínez
Appl. Sci. 2021, 11(16), 7217; https://doi.org/10.3390/app11167217 - 5 Aug 2021
Cited by 11 | Viewed by 3000
Abstract
Spatial Transformer Networks are considered a powerful algorithm for learning the main areas of an image, but they could still be more efficient if they received images with embedded expert knowledge. This paper aims to improve the performance of conventional Spatial Transformers when applied to Facial Expression Recognition. Based on the Spatial Transformers’ capacity for spatial manipulation within networks, we propose different extensions to these models in which effective attentional regions are captured employing facial landmarks or facial visual saliency maps. This specific attentional information is then hardcoded to guide the Spatial Transformers to learn the spatial transformations that best fit the proposed regions for better recognition results. For this study, we use two datasets: AffectNet and FER-2013. For AffectNet, we achieve an absolute improvement of 0.35 percentage points relative to the traditional Spatial Transformer, whereas for FER-2013, our solution yields an increase of 1.49% when models are fine-tuned with the AffectNet pre-trained weights.
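One concrete way to encode the "guidance" the abstract mentions is to turn the facial landmarks' bounding box into the 2×3 affine matrix theta that a Spatial Transformer's sampling grid would use to crop that region. The sketch below is hypothetical (the paper's networks learn the transform; the function name and margin value are illustrative) and follows the usual spatial-transformer convention of image coordinates in [-1, 1].

```python
def theta_from_landmarks(landmarks, margin=0.1):
    """landmarks: list of (x, y) in [-1, 1]. Returns a 2x3 affine matrix
    mapping the output grid onto the landmark region (plus a margin)."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    x_min, x_max = min(xs) - margin, max(xs) + margin
    y_min, y_max = min(ys) - margin, max(ys) + margin
    sx = (x_max - x_min) / 2.0   # half-width  -> x scale
    sy = (y_max - y_min) / 2.0   # half-height -> y scale
    tx = (x_max + x_min) / 2.0   # region centre -> x translation
    ty = (y_max + y_min) / 2.0   # region centre -> y translation
    return [[sx, 0.0, tx],
            [0.0, sy, ty]]

# Five toy landmarks (eyes, nose, mouth corners) around the image centre.
pts = [(-0.3, -0.2), (0.3, -0.2), (0.0, 0.0), (-0.2, 0.3), (0.2, 0.3)]
theta = theta_from_landmarks(pts)
```

A transformer initialised or regularised toward such a theta starts by attending to the face region rather than the whole frame.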
(This article belongs to the Special Issue Computational Trust and Reputation Models)

16 pages, 1085 KiB  
Article
Detecting Deception from Gaze and Speech Using a Multimodal Attention LSTM-Based Framework
by Ascensión Gallardo-Antolín and Juan M. Montero
Appl. Sci. 2021, 11(14), 6393; https://doi.org/10.3390/app11146393 - 11 Jul 2021
Cited by 14 | Viewed by 2799
Abstract
The automatic detection of deceptive behaviors has recently attracted the attention of the research community due to the variety of areas where it can play a crucial role, such as security or criminology. This work is focused on the development of an automatic deception detection system based on gaze and speech features. The first contribution of our research on this topic is the use of attention Long Short-Term Memory (LSTM) networks for single-modal systems with frame-level features as input. In the second contribution, we propose a multimodal system that combines the gaze and speech modalities into the LSTM architecture using two different combination strategies: Late Fusion and Attention-Pooling Fusion. The proposed models are evaluated over the Bag-of-Lies dataset, a multimodal database recorded in real conditions. On the one hand, results show that attentional LSTM networks are able to adequately model the gaze and speech feature sequences, outperforming a reference Support Vector Machine (SVM)-based system with compact features. On the other hand, both combination strategies produce better results than the single-modal systems and the multimodal reference system, suggesting that gaze and speech modalities carry complementary information for the task of deception detection that can be effectively exploited by using LSTMs.
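The attention-pooling idea at the core of the abstract can be sketched without any deep-learning framework: frame-level feature vectors (standing in for the LSTM hidden states over gaze or speech frames) are collapsed into one utterance-level vector by a softmax-weighted sum, so that informative frames dominate. In the paper the scoring vector is learned; here `u`, the toy sequences, and the final concatenation are all illustrative.

```python
import math

def attention_pool(frames, u):
    """frames: list of T feature vectors; u: scoring vector of the same dim.
    Returns the attention-pooled vector and the attention weights."""
    scores = [sum(ui * fi for ui, fi in zip(u, f)) for f in frames]
    m = max(scores)                              # for numerical stability
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    alpha = [e / z for e in exp]                 # attention weights, sum to 1
    dim = len(frames[0])
    pooled = [sum(a * f[j] for a, f in zip(alpha, frames)) for j in range(dim)]
    return pooled, alpha

# Toy gaze and speech sequences (3 frames, 2-dim features each).
gaze = [[0.1, 0.0], [0.9, 0.2], [0.2, 0.1]]
speech = [[0.5, 0.3], [0.4, 0.1], [0.8, 0.9]]
u = [1.0, 1.0]
g_vec, g_alpha = attention_pool(gaze, u)
s_vec, s_alpha = attention_pool(speech, u)
fused = g_vec + s_vec   # late-fusion-style concatenation of the pooled modalities
```

Each modality gets its own pooled summary; fusing after pooling (as above) corresponds to the Late Fusion strategy, while fusing frame sequences before a shared attention layer corresponds to Attention-Pooling Fusion.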
(This article belongs to the Special Issue Computational Trust and Reputation Models)

15 pages, 1367 KiB  
Article
Paragraph Boundary Recognition in Novels for Story Understanding
by Riku Iikura, Makoto Okada and Naoki Mori
Appl. Sci. 2021, 11(12), 5632; https://doi.org/10.3390/app11125632 - 18 Jun 2021
Viewed by 2618
Abstract
The understanding of narrative stories by computer is an important task for their automatic generation. To date, high-performance neural-network technologies such as BERT have been applied to tasks such as the Story Cloze Test and Story Completion. In this study, we focus on the text segmentation of novels into paragraphs, which is an important writing technique for readers to deepen their understanding of the texts. This type of segmentation, which we call “paragraph boundary recognition”, can be considered a binary classification problem in terms of the presence or absence of a boundary between target sentences. However, in this case, data imbalance becomes a bottleneck because the number of paragraphs is generally smaller than the number of sentences. To deal with this problem, we introduced into BERT several cost-sensitive loss functions, namely focal loss, dice loss, and anchor loss, which are robust to imbalanced classification. In addition, introducing the threshold-moving technique into the model was effective in estimating paragraph boundaries. In experiments on three newly created datasets, BERT with dice loss and threshold moving obtained a higher F1 score than the original BERT using cross-entropy loss (76% to 80%, 50% to 54%, 59% to 63%).
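The two imbalance remedies the abstract combines can be sketched in a few lines: a soft dice loss over predicted boundary probabilities, and "threshold moving", i.e. declaring a paragraph boundary when p ≥ t for a tuned t rather than the default 0.5. The formulas follow the common soft-dice definition; the smoothing constant, threshold grid, and toy validation data are illustrative, not taken from the paper.

```python
def dice_loss(probs, labels, smooth=1.0):
    """Soft dice loss for binary labels: 1 - 2*|P∩Y| / (|P| + |Y|)."""
    inter = sum(p * y for p, y in zip(probs, labels))
    total = sum(probs) + sum(labels)
    return 1.0 - (2.0 * inter + smooth) / (total + smooth)

def best_threshold(probs, labels, grid=None):
    """Threshold moving: pick the cut-off maximising F1 on validation data."""
    grid = grid or [i / 100 for i in range(5, 96, 5)]
    def f1(t):
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        fp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 0)
        fn = sum(1 for p, y in zip(probs, labels) if p < t and y == 1)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(grid, key=f1)

# Imbalanced toy validation set: few boundaries (1) among many non-boundaries.
probs  = [0.9, 0.4, 0.2, 0.1, 0.35, 0.05, 0.15, 0.3]
labels = [1,   1,   0,   0,   0,    0,    0,    0  ]
t = best_threshold(probs, labels)
```

Unlike cross-entropy, the dice loss is driven by the overlap between predicted and true positives, so the abundant negative class cannot dominate the objective; threshold moving then compensates for the residual bias of a model trained on skewed data.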
(This article belongs to the Special Issue Computational Trust and Reputation Models)
