
Human-Computer Interaction in Pervasive Computing Environments

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Intelligent Sensors".

Viewed by 57531

Editor

Dr. Brij B. Gupta
Department of Computer Science and Information Engineering, Asia University, Taichung 41354, Taiwan
Interests: information security; cyber physical systems; cloud computing; blockchain technologies; intrusion detection; artificial intelligence; social media and networking

Topical Collection Information

Dear Colleagues,

Sensors are widely used in everyday life today. They are present in a wide variety of areas, offering an excellent opportunity to face challenges related to medicine and healthcare, smart cities, smart homes, smart learning, and entertainment, among others. Sensors bring technology closer to humans in an increasingly transparent and natural way, building genuine technological ecosystems in which human–computer interaction plays a key role.

Despite the widespread adoption of sensors, however, it is necessary to continue improving their design, implementation, and use in order to enhance usability, accessibility, and user experience in smart environments.

The aim of this Topical Collection is to highlight recent advances and trends in human–computer interaction in pervasive computing environments. It will address a broad range of topics related to smart environments, including (but not limited to) the following:

  • Usability, accessibility, and sustainability;
  • User experience;
  • Natural user interfaces;
  • Sensor networks;
  • Haptic computing;
  • Ambient-assisted living;
  • Healthcare environments;
  • Smart cities design;
  • Multimodal systems and interfaces;
  • IoT dashboards and platforms;
  • Technological ecosystems for smart environments;
  • Service-oriented information visualization;
  • Smart interfaces for learning;
  • Ambient and pervasive interactions.

Dr. Brij B. Gupta
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart environment
  • smart spaces
  • smart cities
  • smart interfaces for learning
  • human–computer interaction
  • usability, accessibility and sustainability
  • user experience
  • natural user interfaces
  • sensor networks
  • haptic computing
  • ambient assisted living
  • healthcare environments
  • multimodal systems
  • multimodal interfaces
  • IoT dashboards
  • IoT platforms
  • technological ecosystems
  • service-oriented information visualization
  • ambient and pervasive interactions

Published Papers (20 papers)

2023


25 pages, 5199 KiB  
Article
Crop Disease Identification by Fusing Multiscale Convolution and Vision Transformer
by Dingju Zhu, Jianbin Tan, Chao Wu, KaiLeung Yung and Andrew W. H. Ip
Sensors 2023, 23(13), 6015; https://0-doi-org.brum.beds.ac.uk/10.3390/s23136015 - 29 Jun 2023
Cited by 1 | Viewed by 1185
Abstract
With the development of smart agriculture, deep learning is playing an increasingly important role in crop disease recognition. The existing crop disease recognition models are mainly based on convolutional neural networks (CNNs). Although traditional CNN models have excellent performance in modeling local relationships, they struggle to extract global features. This study combines the advantages of CNNs in extracting local disease information with those of the vision transformer in obtaining global receptive fields to design a hybrid model called MSCVT. The model incorporates a multiscale self-attention module, which combines multiscale convolution and self-attention mechanisms and enables the fusion of local and global features at both the shallow and deep levels of the model. In addition, the model uses the inverted residual block in place of normal convolution to maintain a low parameter count. To verify the validity and adaptability of MSCVT on crop disease datasets, experiments were conducted on the PlantVillage dataset and the Apple Leaf Pathology dataset, obtaining recognition accuracies of 99.86% and 97.50%, respectively. In comparison with other CNN models, the proposed model achieved advanced performance in both cases. The experimental results show that MSCVT can obtain high recognition accuracy in crop disease recognition and shows excellent adaptability in multidisease recognition and small-scale disease recognition. Full article
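The fusion the abstract describes, combining local multiscale convolutional features with global self-attention, can be illustrated with a toy NumPy sketch. This is not the paper's MSCVT: the k×k average filter stands in for a learned convolution, projections are omitted, and all names and shapes are illustrative.

```python
import numpy as np

def avg_pool_conv(x, k):
    """Stand-in for a k x k convolution: local averaging with 'same' padding."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def self_attention(tokens):
    """Scaled dot-product self-attention (single head, identity projections)."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

def multiscale_attention_block(img, scales=(1, 3, 5)):
    """Fuse local (multiscale convolution) and global (self-attention) features:
    one token per pixel, one channel per convolution scale."""
    feats = np.stack([avg_pool_conv(img, k) for k in scales], axis=-1)  # H x W x S
    tokens = feats.reshape(-1, len(scales))
    return self_attention(tokens).reshape(img.shape + (len(scales),))
```

The key point is that the convolution branch only mixes a k×k neighbourhood, while the attention step lets every pixel token attend to every other one, which is the global receptive field the abstract refers to.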

35 pages, 4437 KiB  
Article
Co-Design Dedicated System for Efficient Object Tracking Using Swarm Intelligence-Oriented Search Strategies
by Nadia Nedjah, Alexandre V. Cardoso, Yuri M. Tavares, Luiza de Macedo Mourelle, Brij Booshan Gupta and Varsha Arya
Sensors 2023, 23(13), 5881; https://0-doi-org.brum.beds.ac.uk/10.3390/s23135881 - 25 Jun 2023
Cited by 1 | Viewed by 956
Abstract
The template matching technique is one of the most applied methods to find patterns in images, in which a reduced-size image, called a target, is searched within another image that represents the overall environment. In this work, template matching is used via a co-design system. A hardware coprocessor is designed for the computationally demanding step of template matching, which is the calculation of the normalized cross-correlation coefficient. This computation allows invariance in the global brightness changes in the images, but it is computationally more expensive when using images of larger dimensions, or even sets of images. Furthermore, we investigate the performance of six different swarm intelligence techniques aiming to accelerate the target search process. To evaluate the proposed design, the processing time, the number of iterations, and the success rate were compared. The results show that it is possible to obtain approaches capable of processing video images at 30 frames per second with an acceptable average success rate for detecting the tracked target. The search strategies based on PSO, ABC, FFA, and CS are able to meet the processing time of 30 frame/s, yielding average accuracy rates above 80% for the pipelined co-design implementation. However, FWA, EHO, and BFOA could not achieve the required timing restriction, and they achieved an acceptance rate around 60%. Among all the investigated search strategies, the PSO provides the best performance, yielding an average processing time of 16.22 ms coupled with a 95% success rate. Full article
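The coefficient at the heart of the coprocessor, normalized cross-correlation, can be sketched as a naive exhaustive search in plain NumPy. This is illustrative only: the paper computes the coefficient in hardware and replaces the exhaustive scan with swarm-intelligence search strategies.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient between two equal-size
    patches; invariant to global brightness and contrast changes."""
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def search(image, template):
    """Exhaustive search: slide the template over the image and return the
    (row, col) offset with the highest NCC score."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = ncc(image[r:r + h, c:c + w], template)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos, best
```

Because both patch and template are mean-subtracted and norm-divided, adding a constant brightness offset to the whole image leaves the score unchanged, which is the invariance property the abstract mentions; the cost of the full scan is also why the swarm strategies pay off.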

24 pages, 8788 KiB  
Review
Graph Visualization: Alternative Models Inspired by Bioinformatics
by Maxim Kolomeets, Vasily Desnitsky, Igor Kotenko and Andrey Chechulin
Sensors 2023, 23(7), 3747; https://0-doi-org.brum.beds.ac.uk/10.3390/s23073747 - 04 Apr 2023
Cited by 1 | Viewed by 2086
Abstract
Methods and means of human–machine interaction, and of visualization as its integral part, are being increasingly developed. In various fields of scientific knowledge and technology, there is a need to find and select the most effective visualization models for various types of data, as well as to develop automation tools for the process of choosing the best visualization model for a specific case. There are many data visualization tools in various application fields, but the main difficulty lies in presenting data with an interconnected (node-link) structure, i.e., networks. Typically, software tools use graphs as the most straightforward and versatile models. To facilitate visual analysis, researchers are developing ways to arrange graph elements to make comparing, searching, and navigating data easier. However, in addition to graphs, there are many other visualization models that are less versatile but have the potential to expand the capabilities of the analyst and provide alternative solutions. In this work, we collected a variety of visualization models, which we call alternative models, to demonstrate how different concepts of information representation can be realized. We believe that adapting these models to improve the means of human–machine interaction will help analysts make significant progress in solving the problems researchers face when working with graphs. Full article

28 pages, 31647 KiB  
Article
Pactolo Bar: An Approach to Mitigate the Midas Touch Problem in Non-Conventional Interaction
by Alexandre Freitas, Diego Santos, Rodrigo Lima, Carlos Gustavo Santos and Bianchi Meiguins
Sensors 2023, 23(4), 2110; https://0-doi-org.brum.beds.ac.uk/10.3390/s23042110 - 13 Feb 2023
Viewed by 1587
Abstract
New ways of interacting with computers are driving research, motivated mainly by the different types of user profiles. Referred to as non-conventional interactions, these rely on the hands, voice, head, mouth, feet, etc., and occur in scenarios where the use of a mouse and keyboard would be difficult. A constant challenge in the adoption of new forms of interaction, based on the movement of pointers and the selection of interface components, is the Midas Touch (MT) problem, defined as the involuntary selection action by the user when interacting with the computer system, causing unwanted actions and harming the user experience. Thus, this article aims to mitigate the MT problem in interaction with web pages using a solution centered on the Head Tracking (HT) technique. For this purpose, a component in the form of a bar, called the Pactolo Bar (PB), was developed and inserted on the left side of the web page in order to enable or disable the clicking event during the interaction process. As a way of analyzing the effectiveness of the PB in relation to MT, two stages of tests were carried out with voluntary participants. The first stage aimed to find the data that would lead to the best configuration of the PB, while the second stage carried out a comparative analysis between the PB solution and the eViacam software, which is also based on the HT technique. The results obtained from the use of the PB are promising: the analysis of quantitative data points to a significant prevention of involuntary clicks in the interaction interface, and the analysis of qualitative data showed a better user experience due to ease of use, which can be noticed in elements such as the PB size, the triggering mechanism, and its positioning in the graphical interface. This study contributes to user experience research because, when non-conventional interactions are used, basic aspects of the graphic elements and interaction events motivate new studies that seek to mitigate the Midas Touch problem. Full article

2022


13 pages, 3963 KiB  
Communication
Method for Recognizing Pressing Position and Shear Force Using Active Acoustic Sensing on Gel Plates
by Hiroki Watanabe, Kaito Sasaki, Tsutomu Terada and Masahiko Tsukamoto
Sensors 2022, 22(24), 9951; https://0-doi-org.brum.beds.ac.uk/10.3390/s22249951 - 16 Dec 2022
Viewed by 1309
Abstract
A touch interface is an important technology used in many devices, including touch panels in smartphones. Many touch panels only detect the contact position. If devices can detect shear force in addition to the contact position, various touch interactions are possible. We propose a two-step recognition method for recognizing the pressing position and shear force using active acoustic sensing, which transmits acoustic signals to an object and recognizes the state of the object by analyzing its response. Specifically, we attach a contact speaker transmitting an ultrasonic sweep signal and a contact microphone receiving ultrasonic waves to a plate of gel. The propagation characteristics of ultrasonic waves differ due to changes in the shape of the gel caused by the user’s actions on the gel. This system recognizes the pressing position and shear force on the basis of the difference in acoustic characteristics. An evaluation of our method involving a user-independent model confirmed that four pressing positions were recognized with an F1 score of 85.4%, and four shear-force directions were recognized with an F1 score of 69.4%. Full article
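The active-acoustic-sensing pipeline the abstract describes, transmit a sweep, receive it through the object, and classify the change in propagation characteristics, can be sketched in NumPy. Everything here is an assumption for illustration: the sample rate, the 20-40 kHz sweep range, the coarse band-energy feature, and the nearest-template classifier are stand-ins, not the paper's signal chain or machine-learning model.

```python
import numpy as np

fs = 96_000                                   # assumed sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)
# Linear sweep from 20 kHz to 40 kHz (illustrative ultrasonic range)
sweep = np.sin(2 * np.pi * (20_000 + (40_000 - 20_000) / (2 * t[-1]) * t) * t)

def spectrum(sig, n_bands=32):
    """Coarse magnitude-spectrum feature: energy in n_bands frequency bins."""
    mag = np.abs(np.fft.rfft(sig))
    return np.array([band.sum() for band in np.array_split(mag, n_bands)])

def classify(received, templates):
    """Nearest-template classification of the received acoustic response:
    the gel state whose reference response is spectrally closest wins."""
    feats = spectrum(received)
    dists = {label: np.linalg.norm(feats - spectrum(ref))
             for label, ref in templates.items()}
    return min(dists, key=dists.get)
```

Pressing or shearing the gel changes how the sweep propagates, so each press position or shear direction yields a distinct received spectrum; the classifier only has to match the feature vector against recorded references.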

22 pages, 42226 KiB  
Article
The Hybrid Stylus: A Multi-Surface Active Stylus for Interacting with and Handwriting on Paper, Tabletop Display or Both
by Cuauhtli Campos, Jakub Sandak, Matjaž Kljun and Klen Čopič Pucihar
Sensors 2022, 22(18), 7058; https://0-doi-org.brum.beds.ac.uk/10.3390/s22187058 - 18 Sep 2022
Cited by 6 | Viewed by 3063
Abstract
The distinct properties and affordances of paper provide benefits that have enabled paper to maintain an important role in the digital age, so much so that some pen–paper interaction has been imitated in the digital world with touchscreens and stylus pens. Because the digital medium also provides several advantages not available to physical paper, there is a clear benefit to merging the two mediums. Despite the plethora of concepts, prototypes and systems to digitise handwritten information on paper, these systems require specially prepared paper and complex setups and software, can be used solely in combination with paper, and, most importantly, do not support concurrent precise interaction with both mediums (paper and touchscreen) using a single pen. In this paper, we present the design, fabrication and evaluation of the Hybrid Stylus. The Hybrid Stylus is assembled from an infinity-pencil tip (nib) made of graphite and a specially designed shielded tip holder attached to an active stylus. The stylus can be used for writing on physical paper while maintaining all the features needed for tablet interaction. Moreover, the stylus allows simultaneous digitisation of handwritten information on the paper when the paper is placed on the tablet screen. In order to evaluate the concept, we also added a user-friendly manual alignment of the paper position on the underlying tablet computer. The evaluation demonstrates that the system achieves almost perfect digitisation of strokes (98.6% of strokes were correctly registered, with only 1.2% ghost strokes) whilst maintaining an excellent user experience of writing with a pencil on paper. Full article

11 pages, 2244 KiB  
Article
Electrophysiological Features to Aid in the Construction of Predictive Models of Human–Agent Collaboration in Smart Environments
by Dor Mizrahi, Inon Zuckerman and Ilan Laufer
Sensors 2022, 22(17), 6526; https://0-doi-org.brum.beds.ac.uk/10.3390/s22176526 - 30 Aug 2022
Cited by 3 | Viewed by 946
Abstract
Achieving successful human–agent collaboration in the context of smart environments requires the modeling of human behavior for predicting people’s decisions. The goal of the current study was to utilize the TBR and the Alpha band as electrophysiological features that will discriminate between different tasks, each associated with a different depth of reasoning. To that end, we monitored the modulations of the TBR and Alpha, while participants were engaged in performing two cognitive tasks: picking and coordination. In the picking condition (low depth of processing), participants were requested to freely choose a single word out of a string of four words. In the coordination condition (high depth of processing), participants were asked to try and select the same word as an unknown partner that was assigned to them. We performed two types of analyses, one that considers the time factor (i.e., observing dynamic changes across trials) and the other that does not. When the temporal factor was not considered, only Beta was sensitive to the difference between picking and coordination. However, when the temporal factor was included, a transition occurred between cognitive effort and fatigue in the middle stage of the experiment. These results highlight the importance of monitoring the electrophysiological indices, as different factors such as fatigue might affect the instantaneous relative weight of intuitive and deliberate modes of reasoning. Thus, monitoring the response of the human–agent across time in human–agent interactions might turn out to be crucial for smooth coordination in the context of human–computer interaction. Full article
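The TBR monitored here is commonly the theta/beta ratio of EEG band power (the exact band edges vary by convention). A minimal sketch of computing such a ratio from a raw signal via the FFT periodogram, with assumed band limits of 4-7 Hz for theta and 13-30 Hz for beta, not the paper's processing pipeline:

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Power of the signal in the [lo, hi) Hz band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def theta_beta_ratio(sig, fs):
    """TBR: theta (4-7 Hz) power over beta (13-30 Hz) power."""
    return band_power(sig, fs, 4, 7) / band_power(sig, fs, 13, 30)
```

A rising TBR is typically read as reduced attentional engagement, which is why tracking its modulation across trials can expose the cognitive-effort-to-fatigue transition the study reports.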

16 pages, 482 KiB  
Article
Location and Time Aware Multitask Allocation in Mobile Crowd-Sensing Based on Genetic Algorithm
by Aridegbe A. Ipaye, Zhigang Chen, Muhammad Asim, Samia Allaoua Chelloug, Lin Guo, Ali M. A. Ibrahim and Ahmed A. Abd El-Latif
Sensors 2022, 22(8), 3013; https://0-doi-org.brum.beds.ac.uk/10.3390/s22083013 - 14 Apr 2022
Cited by 8 | Viewed by 2046
Abstract
Mobile crowd-sensing (MCS) is a well-known paradigm for obtaining sensed data by using the sensors found in smart devices. With the rise in sensing tasks and workers in MCS systems, it is now essential to design an efficient approach for task allocation. Moreover, to ensure the completion of the tasks, it is necessary to incentivise the workers by rewarding them for participating in the sensing tasks. In this paper, we aim to assist workers in selecting multiple tasks while considering the time constraint of the worker and the requirements of the task. Furthermore, a pricing mechanism is adopted to determine each task's budget, which is then used to determine the payment for the workers based on their willingness factor. This paper proves that task allocation is a non-deterministic polynomial (NP)-complete problem, which is difficult to solve with conventional optimization techniques. A worker multitask allocation genetic algorithm (WMTA-GA) is proposed to solve this problem by maximizing workers' welfare. Finally, theoretical analysis demonstrates the effectiveness of the proposed WMTA-GA. We observed that it performs better than state-of-the-art algorithms in terms of average performance, workers' welfare, and the number of assigned tasks. Full article
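Since the allocation problem is NP-complete, a genetic algorithm searches for good selections rather than enumerating them. A toy sketch of the idea, evolving binary task-selection vectors under a worker's time budget; the population size, operators, fitness, and data are illustrative assumptions, not the paper's WMTA-GA:

```python
import random

def ga_allocate(times, rewards, time_budget, pop=40, gens=60, seed=7):
    """Evolve binary task-selection vectors that maximise reward
    without exceeding the worker's time budget."""
    rng = random.Random(seed)
    n = len(times)

    def fitness(bits):
        t = sum(ti for ti, b in zip(times, bits) if b)
        r = sum(ri for ri, b in zip(rewards, bits) if b)
        return r if t <= time_budget else -1   # infeasible: heavy penalty

    popu = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness, reverse=True)
        elite = popu[: pop // 2]               # elitist selection
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(n)               # single-bit mutation
            child[i] ^= 1
            children.append(child)
        popu = elite + children
    best = max(popu, key=fitness)
    return best, fitness(best)

times = [3, 5, 2, 7, 4]       # hours per task (toy data)
rewards = [6, 8, 3, 11, 5]    # payment per task (toy data)
sel, val = ga_allocate(times, rewards, time_budget=10)
```

Elitism keeps the best allocation found so far, so the returned selection is always feasible once any feasible chromosome has appeared in the population.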

15 pages, 29389 KiB  
Article
Tracking of a Fixed-Shape Moving Object Based on the Gradient Descent Method
by Haris Masood, Amad Zafar, Muhammad Umair Ali, Tehseen Hussain, Muhammad Attique Khan, Usman Tariq and Robertas Damaševičius
Sensors 2022, 22(3), 1098; https://0-doi-org.brum.beds.ac.uk/10.3390/s22031098 - 31 Jan 2022
Cited by 15 | Viewed by 2987
Abstract
Tracking moving objects is one of the most promising yet the most challenging research areas pertaining to computer vision, pattern recognition and image processing. The challenges associated with object tracking range from problems pertaining to camera axis orientations to object occlusion. In addition, variations in remote scene environments add to the difficulties related to object tracking. All the mentioned challenges and problems pertaining to object tracking make the procedure computationally complex and time-consuming. In this paper, a stochastic gradient-based optimization technique has been used in conjunction with particle filters for object tracking. First, the object that needs to be tracked is detected using the Maximum Average Correlation Height (MACH) filter. The object of interest is detected based on the presence of a correlation peak and average similarity measure. The results of object detection are fed to the tracking routine. The gradient descent technique is employed for object tracking and is used to optimize the particle filters. The gradient descent technique allows particles to converge quickly, allowing less time for the object to be tracked. The results of the proposed algorithm are compared with similar state-of-the-art tracking algorithms on five datasets that include both artificial moving objects and humans to show that the gradient-based tracking algorithm provides better results, both in terms of accuracy and speed. Full article
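The speed-up the abstract claims comes from moving particles downhill on the matching cost surface instead of waiting for resampling alone to concentrate them. A minimal sketch of that refinement step; the quadratic cost with a known optimum is a stand-in for the MACH-filter correlation response, which the tracker of course does not know in closed form, and the learning rate and step count are assumptions:

```python
import numpy as np

def refine_particles(particles, cost_grad, lr=0.2, steps=25):
    """Gradient descent on the matching cost: each particle moves downhill,
    so the cloud converges on the tracked object in few iterations."""
    for _ in range(steps):
        particles = particles - lr * cost_grad(particles)
    return particles

# Toy quadratic cost centred on the object at (12, 7); in the paper this
# role is played by the correlation response surface, not a known point.
target = np.array([12.0, 7.0])
cost_grad = lambda p: 2.0 * (p - target)

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 20.0, size=(50, 2))   # initial particle positions
cloud = refine_particles(cloud, cost_grad)
```

With this cost, each step shrinks every particle's error by a constant factor (1 - 2*lr), so a widely scattered cloud collapses onto the object in a couple of dozen iterations, which is the "converge quickly" behaviour the abstract describes.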

17 pages, 4179 KiB  
Article
Robust Assembly Assistance Using Informed Tree Search with Markov Chains
by Arpad Gellert, Radu Sorostinean and Bogdan-Constantin Pirvu
Sensors 2022, 22(2), 495; https://0-doi-org.brum.beds.ac.uk/10.3390/s22020495 - 10 Jan 2022
Cited by 6 | Viewed by 2042
Abstract
Manual work accounts for one of the largest workgroups in the European manufacturing sector, and improving the training capacity, quality, and speed brings significant competitive benefits to companies. In this context, this paper presents an informed tree search on top of a Markov chain that suggests possible next assembly steps as a key component of an innovative assembly training station for manual operations. The goal of the next-step suggestions is to support inexperienced workers, or to assist experienced workers, by providing choices for the next assembly step in an automated manner without the involvement of a human trainer on site. Data stemming from 179 experiment participants (111 factory workers and 68 students) were used to evaluate different prediction methods. From our analysis, Markov chains fail in new scenarios; therefore, by using an informed tree search to predict the possible next assembly step in such situations, the prediction capability of the hybrid algorithm increases significantly while providing robust solutions to unseen scenarios. The proposed method proved to be the most efficient for next-assembly-step prediction among all the evaluated predictors and, thus, the most suitable method for an adaptive assembly support system for manual operations in industry. Full article
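The Markov-chain half of the hybrid can be sketched in a few lines: count observed step-to-step transitions, then suggest the most frequent successor. The sequences and step names are invented, and the `None` return marks exactly the unseen-state situation where, per the abstract, the informed tree search would take over instead.

```python
from collections import defaultdict

class MarkovPredictor:
    """First-order Markov chain over assembly steps: count observed
    transitions, then suggest the most likely next step."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def fit(self, sequences):
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                self.counts[a][b] += 1
        return self

    def predict(self, step):
        successors = self.counts.get(step)
        if not successors:          # unseen state: the paper falls back
            return None             # to informed tree search here
        return max(successors, key=successors.get)

m = MarkovPredictor().fit([["base", "screw", "cover"],
                           ["base", "screw", "label"],
                           ["base", "screw", "cover"]])
```

The chain is cheap and accurate on assembly sequences it has seen, but by construction it has no estimate for states absent from training data, which is the failure mode the tree search addresses.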

2021


21 pages, 3085 KiB  
Article
A Blockchain-IoT Platform for the Smart Pallet Pooling Management
by Chun-Ho Wu, Yung-Po Tsang, Carman Ka-Man Lee and Wai-Ki Ching
Sensors 2021, 21(18), 6310; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186310 - 21 Sep 2021
Cited by 9 | Viewed by 4511
Abstract
Pallet management, as a backbone of logistics and supply chain activities, is essential to supply chain parties, while a number of regulations, standards and operational constraints are considered in daily operations. In recent years, pallet pooling has been unconventionally advocated to manage pallets in a closed-loop system to enhance sustainability and operational effectiveness, but pitfalls in terms of service reliability, quality compliance and pallet limitation may occur when using a single service provider. Therefore, this study incorporates a decentralisation mechanism into pallet management to formulate a technological eco-system for pallet pooling, namely Pallet as a Service (PalletaaS), built on the foundation of consortium blockchain and the Internet of things (IoT). Consortium blockchain is regarded as blockchain 3.0, facilitating industrial applications beyond cryptocurrency, and the synergy of integrating a consortium blockchain and IoT is thus investigated. The corresponding layered architecture is proposed to structure the system deployment in the industry, in which the location-inventory-routing problem for pallet pooling is formulated. To demonstrate the values of this study, a case analysis illustrating the human–computer interaction and pallet pooling operations is conducted. Overall, this study standardises decentralised pallet management in a closed-loop mechanism, resulting in a constructive impact on sustainable development in the logistics industry. Full article

19 pages, 776 KiB  
Systematic Review
User Experience in Social Robots
by Elaheh Shahmir Shourmasti, Ricardo Colomo-Palacios, Harald Holone and Selina Demi
Sensors 2021, 21(15), 5052; https://0-doi-org.brum.beds.ac.uk/10.3390/s21155052 - 26 Jul 2021
Cited by 18 | Viewed by 4692
Abstract
Social robots are increasingly penetrating our daily lives. They are used in various domains, such as healthcare, education, business, industry, and culture. However, introducing this technology for use in conventional environments is not trivial. For users to accept social robots, a positive user experience is vital, and it should be considered a critical part of the robots' development process. This may potentially lead to extensive use of social robots and strengthen their diffusion in society. The goal of this study is to summarize the extant literature that is focused on user experience in social robots, and to identify the challenges and benefits of UX evaluation in social robots. To achieve this goal, the authors carried out a systematic literature review that relies on PRISMA guidelines. Our findings revealed that the most common methods to evaluate UX in social robots are questionnaires and interviews. UX evaluations were found to be beneficial in providing early feedback and consequently in handling errors at an early stage. However, despite the importance of UX in social robots, robot developers often neglect to set UX goals due to lack of knowledge or lack of time. This study emphasizes the need for robot developers to acquire the required theoretical and practical knowledge on how to perform a successful UX evaluation. Full article

19 pages, 453 KiB  
Article
Power and Radio Resource Management in Femtocell Networks for Interference Mitigation
by Sultan Alotaibi and Hassan Sinky
Sensors 2021, 21(14), 4843; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144843 - 15 Jul 2021
Cited by 8 | Viewed by 2155
Abstract
Mobile traffic volume has exploded because of the rapid improvement of mobile devices and their applications. Heterogeneous networks (HetNets) are an attractive solution for accommodating the exponential growth of wireless data. Femtocell networks fall within the concept of HetNets, and their implementation has been considered an innovative approach that can improve a network's capacity. However, dense implementation and installation of femtocells introduces interference, which reduces network performance. Interference occurs when two adjacent femtocells operate on the same radio resources. In this work, a two-stage scheme is proposed. The first stage distributes radio resources among femtocells, where each femtocell can identify the source of the interference. A table is constructed to measure the level of interference for each femtocell. Accordingly, the level of interference on each sub-channel can be recognized by all femtocells. The second stage includes a mechanism that helps femtocell base stations adjust their transmission power autonomously to alleviate the interference. It enforces a cost function, which should be realized by each femtocell. The cost function is calculated based on the undesirable interference introduced by each femtocell. Hence, the transmission power is adjusted autonomously, and undesirable interference can be monitored and alleviated. The proposed scheme is evaluated through a MATLAB simulation and compared with other approaches. The simulation results show an improvement in the network's capacity. Furthermore, the unfavorable impact of the interference can be managed and alleviated. Full article

23 pages, 1328 KiB  
Review
Business Simulation Games Analysis Supported by Human-Computer Interfaces: A Systematic Review
by Cleiton Pons Ferreira, Carina Soledad González-González and Diana Francisca Adamatti
Sensors 2021, 21(14), 4810; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144810 - 14 Jul 2021
Cited by 11 | Viewed by 3245
Abstract
This article performs a systematic review of studies to answer the question: Which studies address the learning process with (serious) business games using data collection techniques based on electroencephalogram or eye-tracking signals? The PRISMA declaration method was used to guide the search for and inclusion of related works. The 19 references resulting from the critical evaluation initially point to a gap in investigations into using these devices to monitor serious games for learning in organizational environments. A comparison with equivalent sensing studies in serious games for developing skills and competencies indicates that continuous monitoring measures, such as mental state and eye fixation, effectively identified the players' attention levels. These studies also captured flow effectively at different moments of the task, motivating and justifying their replication as a source of insights for the optimized design of business learning tools. This study is the first systematic review to consolidate the existing literature on user experience analysis of business simulation games supported by human-computer interfaces. Full article

26 pages, 3397 KiB  
Article
A Qualitative Approach to Help Adjust the Design of Management Subjects in ICT Engineering Undergraduate Programs through User Experience in a Smart Classroom Context
by Josep Petchamé, Ignasi Iriondo, Eva Villegas, David Fonseca, Susana Romero Yesa and Marian Aláez
Sensors 2021, 21(14), 4762; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144762 - 12 Jul 2021
Cited by 10 | Viewed by 2814
Abstract
Qualitative research activities, including first-day-of-class surveys and user experience interviews on completion of a subject, were carried out to obtain students' feedback in order to improve the design of the subject 'Information Systems', as part of a general initiative to enhance ICT (Information and Communication Technologies) engineering programs. Due to the COVID-19 (coronavirus disease 2019) pandemic, La Salle URL adopted an Emergency Remote Teaching tactical solution in the second semester of the 2019–2020 academic year, just before implementing a strategic learning approach based on a new Smart Classroom (SC) system deployed in the campus facilities. The latter solution was developed to ensure that both on-campus and off-campus students could effectively follow the course syllabus through new technological devices introduced in classrooms and laboratories, reducing the inherent difficulties of online learning. Our findings show that: (1) students identified no major concerns about the subject; (2) interaction and class dynamics were the main issues identified by students, while saving commuting time when learning from home and access to recorded class sessions were the aspects students considered most advantageous about the SC. Full article

19 pages, 6703 KiB  
Article
Gamification and Hazard Communication in Virtual Reality: A Qualitative Study
by Janaina Cavalcanti, Victor Valls, Manuel Contero and David Fonseca
Sensors 2021, 21(14), 4663; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144663 - 07 Jul 2021
Cited by 23 | Viewed by 3879
Abstract
An effective warning attracts attention, elicits knowledge, and enables compliance behavior. Game mechanics, which are directly linked to human desires, stand out as training, evaluation, and improvement tools. Immersive virtual reality (VR) facilitates training without risk to participants, evaluates the impact of an incorrect action or decision, and creates a smart training environment. The present study analyzes the user experience in a gamified virtual environment of risks using the HTC Vive head-mounted display. The game was developed in the Unreal game engine and consisted of a walk-through maze composed of evident dangers and different signaling variables, while user action data were recorded. To determine which aspects provide better interaction, experience, perception, and memory, three different warning configurations (dynamic, static, and smart) and two levels of danger (low and high) were presented. To properly assess the impact of the experience, we conducted a survey about personality and knowledge before and after using the game. We then took a qualitative approach, using questions in a bipolar laddering assessment that was compared with the data recorded during the game. The findings indicate that when users are engaged in VR, they tend to test the consequences of their actions rather than maintain safety. The results also reveal that textual signal variables are not accessed when users face the stress factor of time. Progress is needed in implementing new technologies for warnings and advance notifications to improve the evaluation of human behavior in high-risk virtual environments. Full article

18 pages, 4998 KiB  
Article
Measuring User Experience, Usability and Interactivity of a Personalized Mobile Augmented Reality Training System
by Christos Papakostas, Christos Troussas, Akrivi Krouska and Cleo Sgouropoulou
Sensors 2021, 21(11), 3888; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113888 - 04 Jun 2021
Cited by 48 | Viewed by 5060
Abstract
Innovative technology has been an important part of firefighting, as it advances firefighters' safety and effectiveness. Prior research has examined the implementation of training systems using augmented reality (AR) in other domains, such as welding, aviation, the military, and mathematics, offering significant pedagogical affordances. Nevertheless, firefighting training systems using AR remain an under-researched area. The increasing penetration of AR for training is the driving force behind this study, whose scope is to analyze the main aspects affecting the acceptance of AR by firefighters. The current research uses a technology acceptance model (TAM), extended by the external constructs of perceived interactivity and personalization, to consider both the system and individual levels. The proposed model was evaluated by a sample of 200 users, and the results show that the external variables of perceived interactivity and perceived personalization are both prerequisite factors in extending the TAM. The findings reveal that usability is the strongest predictor of firefighters' behavioral intentions to use the AR system, followed by ease of use, with smaller yet meaningful direct and indirect effects on firefighters' intentions. The identified acceptance factors help AR developers enhance the firefighters' experience in training operations. Full article

18 pages, 4393 KiB  
Article
Assembly Assistance System with Decision Trees and Ensemble Learning
by Radu Sorostinean, Arpad Gellert and Bogdan-Constantin Pirvu
Sensors 2021, 21(11), 3580; https://0-doi-org.brum.beds.ac.uk/10.3390/s21113580 - 21 May 2021
Cited by 13 | Viewed by 2743
Abstract
This paper presents different prediction methods based on decision trees and ensemble learning to suggest possible next assembly steps. The predictor is designed to be a component of a sensor-based assembly assistance system whose goal is to provide support via adaptive instructions, considering the assembly progress and, in the future, the estimation of user emotions during training. The assembly assistance station supports inexperienced manufacturing workers, but it can also be useful in assisting experienced workers. The proposed predictors are evaluated on data collected in experiments involving both trainees and manufacturing workers, as well as on a mixed dataset, and are compared with other existing predictors. The novelty of the paper is the decision tree-based prediction of assembly states, in contrast with previous algorithms, which are stochastic or neural network-based. The results show that ensemble learning with decision tree components is best suited for adaptive assembly support systems. Full article
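The prediction task above, suggesting the next assembly step from the current state, can be illustrated with a toy bagging-style ensemble. This is not the authors' model, only a hedged sketch: the sequence data format, the per-"tree" lookup tables, and the majority vote are simplifications standing in for real decision trees trained on assembly logs.

```python
import random
from collections import Counter, defaultdict

def train_tree(sequences, rng):
    """One 'tree': the most frequent next step per state, learned on a
    bootstrap resample of the recorded assembly sequences."""
    sample = [rng.choice(sequences) for _ in sequences]
    table = defaultdict(Counter)
    for seq in sample:
        for cur, nxt in zip(seq, seq[1:]):
            table[cur][nxt] += 1
    return {state: counts.most_common(1)[0][0] for state, counts in table.items()}

def train_ensemble(sequences, n_trees=25, seed=0):
    """Bagging: many weak predictors, each fit on a different resample."""
    rng = random.Random(seed)
    return [train_tree(sequences, rng) for _ in range(n_trees)]

def predict_next(trees, state):
    """Majority vote across the ensemble; None if the state was never seen."""
    votes = Counter(tree[state] for tree in trees if state in tree)
    return votes.most_common(1)[0][0] if votes else None
```

A real system would replace the lookup tables with decision trees over richer state features, but the bagging-plus-voting structure is the same idea the abstract evaluates.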

24 pages, 3341 KiB  
Article
Visual Echolocation Concept for the Colorophone Sensory Substitution Device Using Virtual Reality
by Patrycja Bizoń-Angov, Dominik Osiński, Michał Wierzchoń and Jarosław Konieczny
Sensors 2021, 21(1), 237; https://0-doi-org.brum.beds.ac.uk/10.3390/s21010237 - 01 Jan 2021
Cited by 4 | Viewed by 3645
Abstract
Detecting the characteristics of 3D scenes is considered one of the biggest challenges for visually impaired people. This ability is nonetheless crucial for orientation and navigation in the natural environment. Although there are several Electronic Travel Aids aimed at enhancing orientation and mobility for the blind, only a few of them convey both 2D and 3D information, including colour. Moreover, existing devices either focus on a small part of an image or allow interpretation of only a few points in the field of view. Here, we propose a concept of visual echolocation with integrated colour sonification as an extension of Colorophone, an assistive device for visually impaired people. The concept aims at mimicking the process of echolocation and thus provides 2D, 3D, and colour information about the whole scene. Even though the final implementation will use a 3D camera, it is first simulated, as a proof of concept, using VIRCO, a Virtual Reality training and evaluation system for Colorophone. The first experiments showed that it is possible to sonify the colour and distance of the whole scene, which opens up the possibility of implementing the developed algorithm on a hardware-based stereo camera platform. An introductory user evaluation of the system was conducted to assess the effectiveness of the proposed solution for perceiving the distance, position, and colour of objects placed in Virtual Reality. Full article
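Colour-and-distance sonification of the kind described can be sketched as a simple mapping from scene properties to sound parameters. The specific pitch and loudness formulas below are illustrative assumptions, not Colorophone's actual encoding.

```python
def sonify_point(hue, distance, max_distance=5.0):
    """Map one scene point to a (frequency, gain) pair.

    Illustrative encoding: hue (0-360 degrees) sweeps one octave of pitch,
    and distance (metres) controls loudness, closer objects sound louder,
    mimicking the intensity cue of an echo.
    """
    freq = 220.0 * 2 ** (hue / 360.0)                # one octave across the hue circle
    gain = max(0.0, 1.0 - distance / max_distance)   # closer = louder, silent beyond range
    return freq, gain
```

Sweeping such a mapping across the whole depth-and-colour image, rather than a few sampled points, is what distinguishes the scene-wide approach the abstract proposes.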

2020


20 pages, 4276 KiB  
Article
Gaze in the Dark: Gaze Estimation in a Low-Light Environment with Generative Adversarial Networks
by Jung-Hwa Kim and Jin-Woo Jeong
Sensors 2020, 20(17), 4935; https://0-doi-org.brum.beds.ac.uk/10.3390/s20174935 - 31 Aug 2020
Cited by 5 | Viewed by 3252
Abstract
In smart interactive environments, such as digital museums or digital exhibition halls, it is important to accurately understand the user's intent to ensure successful and natural interaction with the exhibition. In the context of predicting user intent, gaze estimation has been considered one of the most effective indicators among recently developed interaction techniques (e.g., face orientation estimation, body tracking, and gesture recognition). Previous gaze estimation techniques, however, are known to be effective only in a controlled lab environment under normal lighting conditions. In this study, we propose a novel deep learning-based approach to achieve successful gaze estimation under various low-light conditions, which is anticipated to be more practical for smart interaction scenarios. The proposed approach utilizes a generative adversarial network (GAN) to enhance users' eye images captured under low-light conditions, thereby restoring the missing information needed for gaze estimation. Afterward, the GAN-recovered images are fed into a convolutional neural network to estimate the direction of the user's gaze. Our experimental results on the modified MPIIGaze dataset demonstrate that the proposed approach achieves an average performance improvement of 4.53–8.9% under low and dark light conditions, which is a promising step toward further research. Full article
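The two-stage pipeline described (enhance the low-light image, then estimate gaze from the result) can be sketched with placeholder stages. The gamma-correction "enhancer" and brightness-centroid "estimator" below are toy stand-ins for the trained GAN and CNN, purely to show the composition, and every name here is an assumption.

```python
# Illustrative two-stage pipeline: a low-light enhancer feeding a gaze
# estimator. Both stages are placeholders for the trained networks.

def enhance(image, gamma=0.5):
    """Stand-in for the GAN enhancer: gamma-brighten pixels in [0, 1]."""
    return [[p ** gamma for p in row] for row in image]

def estimate_gaze(image):
    """Stand-in for the CNN: report the brightness centroid (x, y)."""
    total = sum(sum(row) for row in image) or 1.0
    cx = sum(x * p for row in image for x, p in enumerate(row)) / total
    cy = sum(y * p for y, row in enumerate(image) for p in row) / total
    return cx, cy

def gaze_from_low_light(image):
    """Compose the stages: enhance first, then estimate."""
    return estimate_gaze(enhance(image))
```

The point of the sketch is the ordering: because the estimator only ever sees enhanced images, the recovery stage can be trained or swapped independently of the gaze model, which mirrors the GAN-then-CNN design in the abstract.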