Innovative Solutions for Pervasive Sentiment Analysis

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 5155

Special Issue Editors


Dr. Edoardo Ragusa
Guest Editor
Department of Electrical, Electronic, Telecommunications Engineering, and Naval Architecture, University of Genoa, 16145 Genoa, Italy
Interests: artificial intelligence and machine learning; embedded systems; embedded machine learning; convolutional neural networks; extreme learning machine; sentiment analysis

Prof. Erik Cambria
Guest Editor
School of Computer Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798, Singapore
Interests: sentiment analysis; opinion mining; recommendation systems; commonsense reasoning; dialogue systems; sarcasm detection; personality recognition; natural language-based financial advice

Special Issue Information

Dear Colleagues,

Pervasive electronics provide an enabling technology supporting intelligent processing systems. The availability of distributed sensing and smart devices is changing the way we approach problems. Sentiment analysis is a landmark example due to its inherent complexity. Researchers traditionally approached this problem using text data; now, we have text, images, videos, audio information, biomedical data, information about the user's position, and data from smart devices. This information can boost sentiment analysis tools, opening up new scenarios.

The effective performance of sentiment analysis depends on two components: a sensing system that can provide accurate information about the user's state of mind, and an intelligent processing system that can utilize such information. The latter aspect can be addressed by machine learning (ML), which can mine information from data at the expense of intensive computation. In particular, deep learning automates the feature-extraction process. Thus, new sources of information provided by sensors can be exploited with limited human intervention. However, the amount of data is such that server-based solutions are no longer sufficient. Distributed computing solutions overcome the limitations of the server-based computing paradigm by splitting the computational load across nodes that range from servers to embedded systems. The development of this field necessarily relies on software–hardware co-design.

Using embedded systems to process new data sources is indeed becoming a requirement for building the next generation of sentiment analysis systems. On the other hand, given the constraints imposed by embedded devices in terms of power consumption, latency, size, and cost, the deployment of an ML model on an embedded system poses major challenges. The main goal is to obtain efficient inference functions that can run on resource-constrained edge devices. Under such a paradigm, training might, in principle, be delegated to a different, more powerful platform. An even more demanding goal is to complete training directly on resource-constrained devices.
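As a concrete illustration of this inference-on-the-edge paradigm, the minimal sketch below converts a tiny Keras text classifier into a quantized TensorFlow Lite model that could be deployed on a microcontroller-class device. The architecture, vocabulary size, and output file name are illustrative assumptions, not a prescription of this Special Issue.

```python
import tensorflow as tf

# Toy sentiment classifier (architecture is an illustrative assumption).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,), dtype="int32"),        # 64-token sequences
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),            # positive/negative score
])

# Post-training dynamic-range quantization: weights are stored as 8-bit
# integers, shrinking the model for resource-constrained edge devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("sentiment_int8.tflite", "wb") as f:  # hypothetical output path
    f.write(tflite_model)
```

In this setup, training runs on a workstation or server, while only the compact converted model is shipped to the edge device for inference.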

This Special Issue will focus on software and hardware models and methodologies for sentiment analysis. The aim is to collect the most recent advances in machine learning research for sentiment analysis. Accordingly, the Special Issue welcomes methods and ideas that emphasize the impact of embedded machine learning and novel sensor sources on sentiment analysis technologies and the use of new sensing methods to detect the human state of mind.

The topics of interest for this Special Issue include, but are not limited to:

  • Software/hardware techniques for sentiment analysis;
  • Sentiment analysis using IoT data;
  • Embedded machine learning;
  • Low-power inference engines;
  • Intelligent sensors;
  • Online learning on resource-constrained edge devices;
  • Power-efficient machine learning implementations on embedded devices;
  • The on-chip training of deep neural networks;
  • High-performance, low-power computing for deep learning and computer vision;
  • High-performance, low-power computing for deep-learning-based audio and speech processing;
  • Machine learning for sentiment-aware autonomous systems.

Dr. Edoardo Ragusa
Prof. Erik Cambria
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Software/hardware techniques for sentiment analysis
  • Sentiment analysis using IoT data
  • Embedded machine learning
  • Low-power inference engines
  • Intelligent sensors
  • Online learning on resource-constrained edge devices
  • Power-efficient machine learning implementations on embedded devices
  • The on-chip training of deep neural networks
  • High-performance, low-power computing for deep learning and computer vision
  • High-performance, low-power computing for deep-learning-based audio and speech processing
  • Machine learning for sentiment-aware autonomous systems

Published Papers (2 papers)


Research

25 pages, 885 KiB  
Article
TF-TDA: A Novel Supervised Term Weighting Scheme for Sentiment Analysis
by Arwa Alshehri and Abdulmohsen Algarni
Electronics 2023, 12(7), 1632; https://doi.org/10.3390/electronics12071632 - 30 Mar 2023
Viewed by 2001
Abstract
In text classification tasks, such as sentiment analysis (SA), feature representation and weighting schemes play a crucial role in classification performance. Traditional term weighting schemes depend on the term frequency within the entire document collection; therefore, they are called unsupervised term weighting (UTW) schemes. One of the most popular UTW schemes is term frequency–inverse document frequency (TF-IDF); however, this is not sufficient for SA tasks. Newer weighting schemes have been developed to take advantage of the membership of documents in their categories. These are called supervised term weighting (STW) schemes; however, most of them weigh the extracted features without considering the characteristics of some noisy features and data imbalances. Therefore, in this study, a novel STW approach was proposed, known as term frequency–term discrimination ability (TF-TDA). TF-TDA mainly presents the extracted features with different degrees of discrimination by categorizing them into several groups. Subsequently, each group is weighted based on its contribution. The proposed method was examined over four SA datasets using naive Bayes (NB) and support vector machine (SVM) models. The experimental results proved the superiority of TF-TDA over two baseline term weighting approaches, with improvements ranging from 0.52% to 3.99% in the F1 score. The statistical test results verified the significant improvement obtained by TF-TDA in most cases, where the p-value ranged from 0.0000597 to 0.0455.
(This article belongs to the Special Issue Innovative Solutions for Pervasive Sentiment Analysis)
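To make the contrast between unsupervised and supervised term weighting concrete, the short sketch below computes a standard TF-IDF representation and a generic class-aware weight derived from per-class document frequencies. It is a minimal illustration of the STW idea on assumed toy data; it is not the TF-TDA scheme proposed in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Toy corpus and labels (illustrative assumptions).
docs = ["great phone, love it", "terrible battery, hate it",
        "love the battery life", "hate this terrible phone"]
labels = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative

# Unsupervised baseline: TF-IDF ignores the class labels entirely.
tfidf = TfidfVectorizer().fit_transform(docs)

# Generic supervised weighting: scale term counts by how unevenly each term
# is distributed across the two classes (an STW illustration, not TF-TDA).
cv = CountVectorizer()
counts = cv.fit_transform(docs).toarray()
pos_df = (counts[labels == 1] > 0).sum(axis=0)  # doc frequency in positive class
neg_df = (counts[labels == 0] > 0).sum(axis=0)  # doc frequency in negative class
discrimination = np.abs(pos_df - neg_df) / (pos_df + neg_df + 1e-9)
supervised_weights = counts * (1.0 + discrimination)  # TF x class-aware factor
```

Terms that appear mostly in one class (e.g., "love", "hate") receive a larger factor than terms spread evenly across classes (e.g., "phone", "battery").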

16 pages, 2764 KiB  
Article
Speech Emotion Recognition Using Audio Matching
by Iti Chaturvedi, Tim Noel and Ranjan Satapathy
Electronics 2022, 11(23), 3943; https://doi.org/10.3390/electronics11233943 - 29 Nov 2022
Cited by 3 | Viewed by 2025
Abstract
It has become popular for people to share their opinions about products on TikTok and YouTube. Automatic sentiment extraction on a particular product can assist users in making buying decisions. For videos in languages such as Spanish, the tone of voice can be used to determine sentiments, since the translation is often unknown. In this paper, we propose a novel algorithm to classify sentiments in speech in the presence of environmental noise. Traditional models rely on pretrained audio feature extractors for humans that do not generalize well across different accents. In this paper, we leverage the vector space of emotional concepts where words with similar meanings often have the same prefix. For example, words starting with ‘con’ or ‘ab’ signify absence and hence negative sentiments. Augmentations are a popular way to amplify the training data during audio classification. However, some augmentations may result in a loss of accuracy. Hence, we propose a new metric based on eigenvalues to select the best augmentations. We evaluate the proposed approach on emotions in YouTube videos and outperform baselines in the range of 10–20%. Each neuron learns words with similar pronunciations and emotions. We also use the model to determine the presence of birds from audio recordings in the city.
(This article belongs to the Special Issue Innovative Solutions for Pervasive Sentiment Analysis)
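The sketch below illustrates the general idea of generating audio augmentations and inspecting the eigenvalues of their covariance as a rough measure of how much variation the augmentations introduce. The waveform, the two augmentations, and the eigenvalue-based criterion are simplified assumptions for illustration; they are not the specific metric proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, snr_db=20.0):
    """Inject white noise at a given signal-to-noise ratio (in dB)."""
    noise = rng.standard_normal(len(x))
    scale = np.sqrt(np.mean(x**2) / (10 ** (snr_db / 10) * np.mean(noise**2)))
    return x + scale * noise

def time_shift(x, max_frac=0.1):
    """Circularly shift the waveform by up to max_frac of its length."""
    limit = int(max_frac * len(x))
    return np.roll(x, rng.integers(-limit, limit + 1))

# Toy 1-second "utterance" standing in for a real speech clip at 16 kHz.
clip = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))

augmented = np.stack([add_noise(clip),
                      time_shift(clip),
                      add_noise(time_shift(clip))])

# Illustrative selection signal: eigenvalues of the covariance of the
# augmented set indicate how much variance the augmentations add
# (a stand-in for an eigenvalue-based criterion, not the paper's metric).
eigvals = np.linalg.eigvalsh(np.cov(augmented))
print(eigvals)
```

A near-zero eigenvalue would indicate a nearly redundant augmentation, while very large eigenvalues would suggest augmentations that drift far from the original clip.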
