Computers, Volume 11, Issue 7 (July 2022) – 16 articles

Cover Story: Many applications in the Internet of Things require a dense RFID network for efficient deployment and coverage, which causes interference between tags and readers and reduces the performance of the system. Therefore, communication resource management is required to avoid such problems. In this paper, we propose an anti-collision protocol based on a feed-forward Artificial Neural Network methodology for distributed learning between RFID readers to predict collisions and ensure efficient resource allocation (DMLAR), considering the mobility of tags and readers. The evaluation of our anti-collision protocol is performed for a healthcare scenario, where the collected data are critical and must meet requirements in terms of throughput, delay, overload, integrity, and energy.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 5147 KiB  
Article
Automated Detection of Improper Sitting Postures in Computer Users Based on Motion Capture Sensors
by Firgan Feradov, Valentina Markova and Todor Ganchev
Computers 2022, 11(7), 116; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070116 - 20 Jul 2022
Cited by 5 | Viewed by 2533
Abstract
Prolonged computer-related work can be linked to musculoskeletal disorders (MSD) in the upper limbs and improper posture. In this regard, we report on developing resources supporting improper posture studies based on motion capture sensors. These resources were used to create a baseline detector for the automated detection of improper sitting postures, which was then used to evaluate the applicability of Hjorth’s parameters—Activity, Mobility and Complexity—to this specific classification task. Specifically, based on accelerometer data, we computed Hjorth’s time-domain parameters, which we stacked as feature vectors and fed to a binary classifier (kNN, decision tree, linear SVM and Gaussian SVM). The experimental evaluation in a setup involving two different keyboard types (standard and ergonomic) validated the practical worth of the proposed sitting posture detection method, and we reported an average classification accuracy of up to 98.4%. We deem that this research contributes toward creating an automated system for improper posture monitoring for people working on a computer for prolonged periods. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
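Hjorth's three parameters have standard time-domain definitions, so the feature extraction step is easy to illustrate. The following sketch (not the authors' code; the window size and data are placeholders) computes Activity, Mobility and Complexity per accelerometer axis and stacks them into the kind of feature vector fed to a binary classifier:

```python
import numpy as np

def hjorth_parameters(x):
    """Compute Hjorth's Activity, Mobility and Complexity for a 1-D signal."""
    dx = np.diff(x)    # first derivative (finite differences)
    ddx = np.diff(dx)  # second derivative
    activity = np.var(x)                        # signal power
    mobility = np.sqrt(np.var(dx) / np.var(x))  # mean-frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Stack the three parameters per axis of a tri-axial accelerometer window
# (shape: samples x 3) into one 9-dimensional feature vector.
window = np.random.randn(256, 3)  # placeholder accelerometer window
features = np.hstack([hjorth_parameters(window[:, axis]) for axis in range(3)])
```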

19 pages, 3508 KiB  
Article
Mitigation of Black-Box Attacks on Intrusion Detection Systems-Based ML
by Shahad Alahmed, Qutaiba Alasad, Maytham M. Hammood, Jiann-Shiun Yuan and Mohammed Alawad
Computers 2022, 11(7), 115; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070115 - 20 Jul 2022
Cited by 6 | Viewed by 2288
Abstract
Intrusion detection systems (IDS) are a vital part of network security, as they can be used to protect the network from illegal intrusions and communications. To detect malicious network traffic, several IDS based on machine learning (ML) methods have been developed in the literature. Machine learning models, however, have recently been shown to be vulnerable to adversarial perturbations, which allow an opponent to crash the system while performing network queries. This motivated us to present a defensive model that uses adversarial training based on generative adversarial networks (GANs) as a defense strategy to offer better protection for the system against adversarial perturbations. The experiment was carried out using random forest as a classifier. In addition, both principal component analysis (PCA) and recursive feature elimination (RFE) techniques were leveraged for feature selection to diminish the dimensionality of the dataset, which significantly enhanced the performance of the model. The proposal was tested on a realistic and recent public network dataset: CSE-CICIDS2018. The simulation results showed that GAN-based adversarial training enhanced the resilience of the IDS model and mitigated the severity of the black-box attack. Full article
(This article belongs to the Special Issue IoT: Security, Privacy and Best Practices)
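As a rough illustration of the feature selection and classification stages (the paper's exact parameter choices are not given here, and it may apply PCA and RFE as alternatives rather than in sequence), a scikit-learn pipeline could look like the sketch below; the GAN-based adversarial training step is not shown:

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier

# Hypothetical pipeline: RFE keeps the most informative raw flow features,
# PCA then reduces their dimensionality before the random-forest IDS classifier.
pipeline = Pipeline([
    ("rfe", RFE(RandomForestClassifier(n_estimators=50), n_features_to_select=30)),
    ("pca", PCA(n_components=10)),
    ("clf", RandomForestClassifier(n_estimators=100)),
])
# pipeline.fit(X_train, y_train)  # X_train: CSE-CIC-IDS2018 flow features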

12 pages, 1704 KiB  
Review
High-Performance Computing in Meteorology under a Context of an Era of Graphical Processing Units
by Tosiyuki Nakaegawa
Computers 2022, 11(7), 114; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070114 - 13 Jul 2022
Cited by 2 | Viewed by 3231
Abstract
This short review shows how innovative processing units—including graphical processing units (GPUs)—are used in high-performance computing (HPC) in meteorology, introduces current scientific studies relevant to HPC, and discusses the latest topics in meteorology accelerated by HPC computers. The current status of HPC is distinctly complicated in both hardware and software terms and evolves as quickly as a fast-flowing cascade. Beginners find this status difficult to understand and follow; they must overcome the obstacle of catching up on HPC developments and connecting them to their own studies. HPC systems have accelerated weather forecasts with physics-based models since Richardson’s dream in 1922. Meteorological scientists and model developers have written the codes of these models by making the most of the latest HPC technologies available at the time. Several of the leading HPC systems used for weather forecast models are introduced; each institute chose its HPC system from many possible alternatives to best match its purposes. Six selected topics in high-performance computing in meteorology are also reviewed: floating points; spectral transform in global weather models; heterogeneous computing; exascale computing; co-design; and data-driven weather forecasts. Full article

21 pages, 649 KiB  
Article
Medical-Waste Chain: A Medical Waste Collection, Classification and Treatment Management by Blockchain Technology
by Hai Trieu Le, Khoi Le Quoc, The Anh Nguyen, Khoa Tran Dang, Hong Khanh Vo, Huong Hoang Luong, Hieu Le Van, Khiem Huynh Gia, Loc Van Cao Phu, Duy Nguyen Truong Quoc, Tran Huyen Nguyen, Ha Xuan Son and Nghia Duong-Trung
Computers 2022, 11(7), 113; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070113 - 09 Jul 2022
Cited by 26 | Viewed by 3579
Abstract
Since the outbreak of the COVID-19 pandemic in 2019, there has been unprecedented demand for medical equipment and supplies. However, the problem of waste treatment has not yet been given due attention, i.e., the traditional waste treatment process is carried out independently, and the necessary information is not easy to share. Especially during the COVID-19 pandemic, interaction between parties is minimized to limit infections. To evaluate the current system at medical centers, we also refer to the traditional waste treatment processes of four hospitals in Can Tho and Ho Chi Minh cities (Vietnam). Almost all of these hospitals handle waste independently, with no interaction between the stakeholders. In this article, we propose a decentralized blockchain-based system, named Medical-Waste Chain, for automating the treatment of waste from medical equipment and supplies after usage among the relevant parties. It consists of four components: medical equipment and supplies, waste centers, recycling plants, and sorting factories. Medical-Waste Chain integrates blockchain-based Hyperledger Fabric technology with decentralized storage of medical equipment and supply information, and securely shares related data with stakeholders. We present the system design, along with the interactions among the stakeholders, to ensure the minimization of medical waste generation. We evaluate the performance of the proposed solution using system-wide timing and latency analysis based on the Hyperledger Caliper engine. Our system is developed on a hybrid-blockchain architecture, so it is fully scalable for both on-chain and off-chain extensions. Moreover, the participants do not need to pay any fees to use and upgrade the system. To encourage future use of Medical-Waste Chain, we also share a proof-of-concept in our GitHub repository. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
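For illustration only, the following plain-Python sketch models a record's lifecycle across the four components named in the abstract. It is a data-model sketch, not Hyperledger Fabric chaincode, and all names are hypothetical:

```python
from dataclasses import dataclass, field

# Stage names follow the four components described in the abstract.
STAGES = ["medical_center", "waste_center", "sorting_factory", "recycling_plant"]

@dataclass
class WasteRecord:
    batch_id: str
    stage: str = "medical_center"
    history: list = field(default_factory=list)

    def transfer(self, actor: str) -> None:
        """Advance the batch to the next stage and log the handover."""
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError("batch already at final stage")
        self.history.append((self.stage, STAGES[i + 1], actor))
        self.stage = STAGES[i + 1]

record = WasteRecord("BATCH-001")
record.transfer(actor="WasteCenter-CanTho")  # medical_center -> waste_center
```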

23 pages, 1316 KiB  
Article
Proposal of a Model for the Analysis of the State of the Use of ICT in Education Applied to Technological Institutes of Higher Education
by William Villegas-Ch., Santiago Jácome-Vásconez, Joselin García-Ortiz, Javier Calvache-Sánchez and Santiago Sánchez-Viteri
Computers 2022, 11(7), 112; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070112 - 08 Jul 2022
Cited by 1 | Viewed by 2102
Abstract
The inclusion of information and communication technologies in education has become a priority for all educational models, particularly for higher education institutes that have observed the need to integrate these technologies in the classroom. However, to guarantee educational quality and learning, it is necessary to establish a process that allows the identification of the students’ response to their use. Several works address this issue and have determined the functionality of these technologies, but each environment is different, and this is recognized by the higher education institutes of Ecuador, which have limited economic, technological, and academic resources. This work seeks to create a method that establishes the needs and doubts of students about the use of educational technologies in the classroom without affecting their academic performance. To this end, a process was designed that identifies learning needs through the validation of survey data and a comparison of two groups of students, in which one group uses technology in the classroom and the other follows a traditional education model. From the results of this analysis, the method determines the impact of technology on learning. Full article
(This article belongs to the Special Issue Interactive Technology and Smart Education)
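As a sketch of how such a two-group comparison might be validated statistically (the paper does not specify the test; Welch's t-test is one common choice, and the scores below are invented):

```python
import numpy as np
from scipy import stats

# Hypothetical scores for the two groups described above.
ict_group = np.array([78, 85, 92, 70, 88, 81, 95, 77])          # ICT-supported classroom
traditional_group = np.array([72, 80, 75, 68, 79, 74, 82, 71])  # traditional model

# Welch's t-test (does not assume equal variances) checks whether the
# difference in mean scores between the two groups is statistically significant.
t_stat, p_value = stats.ttest_ind(ict_group, traditional_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> significant difference
```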

26 pages, 4472 KiB  
Article
Multi-Controllers Placement Optimization in SDN by the Hybrid HSA-PSO Algorithm
by Neamah S. Radam, Sufyan T. Faraj Al-Janabi and Khalid Sh. Jasim
Computers 2022, 11(7), 111; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070111 - 04 Jul 2022
Cited by 10 | Viewed by 2330
Abstract
Software-Defined Networking (SDN) is a developing architecture that provides scalability, flexibility, and efficient network management. However, optimal controller placement faces many problems, which affect the performance of the overall network. To address the multi-controller placement problem in the SDN environment (MC-SDN), we propose an approach that uses a hybrid metaheuristic algorithm to improve network performance. Initially, the proposed SDN network is constructed based on graph theory, which improves the connectivity and flexibility between switches and controllers. After that, controller selection is performed by choosing an optimal controller from multiple controllers based on controller features using the firefly optimization algorithm (FA), which improves network performance. Finally, multi-controller placement is performed to reduce the communication latency between switches and controllers. Here, multiple controllers are placed by considering location and distance using a hybrid metaheuristic algorithm combining the harmony search algorithm and particle swarm optimization (HSA-PSO), in which PSO automatically updates the harmony search parameters. The simulation of multi-controller placement is carried out with the CloudSimSDN network simulator, and the simulation results demonstrate the proposed advantages in terms of propagation latency, Round Trip Time (RTT), Time Session (TS) matrix, delay, reliability, and throughput. Full article
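To illustrate the harmony search component, here is a toy sketch, not the authors' implementation: the distance matrix is synthetic, and the HSA parameters that the paper tunes with PSO (memory considering and pitch adjustment rates) are fixed constants here:

```python
import random

def avg_latency(placement, dist):
    """Average switch-to-nearest-controller distance for a placement."""
    return sum(min(dist[s][c] for c in placement) for s in range(len(dist))) / len(dist)

def harmony_search(dist, k, memory_size=10, iters=500, hmcr=0.9, par=0.3):
    n = len(dist)
    memory = [random.sample(range(n), k) for _ in range(memory_size)]
    for _ in range(iters):
        new = []
        for i in range(k):
            if random.random() < hmcr:            # take a value from harmony memory
                c = random.choice(memory)[i]
                if random.random() < par:         # pitch-adjust: move to a neighbour node
                    c = (c + random.choice([-1, 1])) % n
            else:
                c = random.randrange(n)           # random exploration
            new.append(c)
        if len(set(new)) == k:                    # keep controller locations distinct
            worst = max(range(memory_size), key=lambda j: avg_latency(memory[j], dist))
            if avg_latency(new, dist) < avg_latency(memory[worst], dist):
                memory[worst] = new               # replace the worst harmony
    return min(memory, key=lambda p: avg_latency(p, dist))

# Example: 12 switches on a line, 3 controllers to place.
n = 12
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
print(harmony_search(dist, k=3))
```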

15 pages, 3216 KiB  
Article
Walsh–Hadamard Kernel Feature-Based Image Compression Using DCT with Bi-Level Quantization
by Dibyalekha Nayak, Kananbala Ray, Tejaswini Kar and Chiman Kwan
Computers 2022, 11(7), 110; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070110 - 04 Jul 2022
Cited by 1 | Viewed by 2071
Abstract
To meet the high bit rate requirements of many multimedia applications, a lossy image compression algorithm based on Walsh–Hadamard kernel-based feature extraction, the discrete cosine transform (DCT), and bi-level quantization is proposed in this paper. The quantization matrix for each block is selected based on a weighted combination of the block feature strength (BFS), extracted by projecting selected Walsh–Hadamard basis kernels onto the image block. The BFS is compared with an automatically generated threshold to select the specific quantization matrix for compression: blocks with higher BFS are processed via the DCT with the high Q matrix, and blocks with lower feature strength via the DCT with the low Q matrix. Blocks with higher feature strength are thus compressed less, and vice versa. The proposed algorithm is compared with different DCT and block truncation coding (BTC)-based approaches in terms of quality parameters such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) at constant bits per pixel (bpp). The proposed method shows significant improvements in performance over standard JPEG and recent approaches at lower bpp. It achieved an average PSNR of 35.61 dB and an average SSIM of 0.90 at a bpp of 0.5, with better perceptual quality and lower visual artifacts. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
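A minimal sketch of the bi-level idea, assuming a simple energy-based BFS and flat quantization matrices (the paper's kernel selection, weighting, and threshold generation are more elaborate):

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.fft import dctn

H = hadamard(8) / np.sqrt(8)       # orthonormal 8x8 Hadamard basis
Q_LOW = np.full((8, 8), 40.0)      # coarse matrix -> more compression
Q_HIGH = np.full((8, 8), 10.0)     # fine matrix -> less compression

def compress_block(block, threshold=100.0):
    shifted = block - block.mean()              # remove DC so BFS reflects structure
    proj = H @ shifted @ H.T                    # Walsh-Hadamard projection
    bfs = np.abs(proj[:4, :4]).sum()            # feature strength from low-order kernels
    Q = Q_HIGH if bfs > threshold else Q_LOW    # bi-level quantizer selection
    coeffs = dctn(block - 128, norm="ortho")    # 2-D DCT of the level-shifted block
    return np.round(coeffs / Q), Q

block = np.random.randint(0, 256, (8, 8)).astype(float)
quantized, Q = compress_block(block)
```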

14 pages, 855 KiB  
Article
A New Development of FDOSM Based on a 2-Tuple Fuzzy Environment: Evaluation and Benchmark of Network Protocols as a Case Study
by Rand M. Maher, Mahmood M. Salih, Harith A. Hussein and Mohamed A. Ahmed
Computers 2022, 11(7), 109; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070109 - 01 Jul 2022
Cited by 3 | Viewed by 1599
Abstract
Multicriteria decision-making (MCDM) is one of the most common approaches for selecting the best alternative from a set of available alternatives. Many MCDM methods are presented in the academic literature, the latest being the Fuzzy Decision by Opinion Score Method (FDOSM). The FDOSM can solve many challenges that are present in other MCDM methods. However, several problems still exist in the FDOSM and its extensions, such as uncertainty. One of the most significant problems in the use of the FDOSM is the loss of information during the conversion of a decision matrix into an opinion decision matrix. In this paper, the authors expand the FDOSM into the 2-tuple-FDOSM to solve this problem. The methodology behind the development of the 2-tuple-FDOSM is presented, including definitions of the 2-tuple linguistic fuzzy method, which is used to solve the loss-of-information problem present in the FDOSM. A network case study was used in the application of the 2-tuple-FDOSM. The final results show that the 2-tuple-FDOSM can be used to address the problem of loss of information. Finally, a comparison between the basic FDOSM, TOPSIS, and the 2-tuple-FDOSM is presented. Full article
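The 2-tuple linguistic model referred to here is the standard Herrera-Martinez representation: a numeric aggregation result is stored as the closest linguistic term plus the rounding remainder, so no precision is lost. A small sketch with a hypothetical five-term scale:

```python
TERMS = ["very_poor", "poor", "fair", "good", "very_good"]  # hypothetical scale

def to_two_tuple(beta):
    """Translate beta in [0, len(TERMS)-1] into (term, symbolic offset)."""
    i = int(round(beta))
    return TERMS[i], beta - i  # offset in [-0.5, 0.5) keeps the lost precision

def from_two_tuple(term, alpha):
    """Inverse translation back to a numeric value."""
    return TERMS.index(term) + alpha

# An aggregated opinion score of 2.6 would round to "good" in a plain
# linguistic model, losing 0.4; the 2-tuple keeps it as ("good", -0.4)
# (up to floating-point rounding).
print(to_two_tuple(2.6))
```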

13 pages, 1348 KiB  
Article
Enhancing GAN-LCS Performance Using an Abbreviations Checker in Automatic Short Answer Scoring
by Ar-Razy Muhammad, Adhistya Erna Permanasari and Indriana Hidayah
Computers 2022, 11(7), 108; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070108 - 01 Jul 2022
Cited by 2 | Viewed by 1888
Abstract
Automatic short answer scoring methods have been developed with various algorithms over the decades. For the Indonesian language, string-based similarity is most commonly used. This method struggles to accurately measure the similarity of two sentences with significantly different word lengths. The Geometric Average Normalized-Longest Common Subsequence (GAN-LCS) method handles this problem by eliminating non-contributive words using the Longest Common Subsequence method. However, students’ answers may vary not only in character length but also in word choice. For instance, some students tend to write only the abbreviations or acronyms of a phrase instead of writing it out in full. This reduces the character intersection between the reference answer and the student answer, and it can change the sentence structure even when the meaning is the same. Therefore, this study aims to improve the performance of the GAN-LCS method by incorporating an abbreviation checker to handle the abbreviations or acronyms found in the reference answer or student answer. The dataset used in this study consisted of 10 questions, with 1 reference answer for each question, and 585 student answers. The experimental results show an improvement in GAN-LCS performance, with runs 34.43% faster, a Root Mean Square Error (RMSE) lower by 7.65%, and a correlation value increased by 8%. Looking forward, future studies may investigate a method for automatically generating the abbreviations dictionary. Full article
(This article belongs to the Special Issue Interactive Technology and Smart Education)
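A toy sketch of the abbreviation-checker idea (the dictionary is hypothetical, and the plain LCS ratio stands in for the fuller GAN-LCS scoring formula): expanding known abbreviations before measuring similarity lets an abbreviated answer intersect fully with its spelled-out reference.

```python
from functools import lru_cache

ABBREVIATIONS = {"who": "world health organization", "dna": "deoxyribonucleic acid"}

def expand(sentence: str) -> str:
    """Replace known abbreviations with their expansions before scoring."""
    return " ".join(ABBREVIATIONS.get(w, w) for w in sentence.lower().split())

def lcs_len(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == 0 or j == 0:
            return 0
        if a[i - 1] == b[j - 1]:
            return rec(i - 1, j - 1) + 1
        return max(rec(i - 1, j), rec(i, j - 1))
    return rec(len(a), len(b))

reference = expand("world health organization declared a pandemic")
student = expand("WHO declared a pandemic")  # abbreviation expands to match
similarity = lcs_len(reference, student) / max(len(reference), len(student))
```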

20 pages, 6711 KiB  
Article
DMLAR: Distributed Machine Learning-Based Anti-Collision Algorithm for RFID Readers in the Internet of Things
by Rachid Mafamane, Mourad Ouadou, Hajar Sahbani, Nisrine Ibadah and Khalid Minaoui
Computers 2022, 11(7), 107; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070107 - 30 Jun 2022
Cited by 4 | Viewed by 2061
Abstract
Radio Frequency Identification (RFID) is one of the most widely used wireless identification technologies in the Internet of Things. Many application areas require a dense RFID network for efficient deployment and coverage, which causes interference between RFID tags and readers and reduces the performance of the RFID system. Therefore, communication resource management is required to avoid such problems. In this paper, we propose an anti-collision protocol based on a feed-forward Artificial Neural Network methodology for distributed learning between RFID readers to predict collisions and ensure efficient resource allocation (DMLAR), considering the mobility of tags and readers. The evaluation of our anti-collision protocol is performed for different mobility scenarios in healthcare, where the collected data are critical and must meet requirements in terms of throughput, delay, overload, integrity, and energy. The dataset created and distributed by the readers allows for an efficient learning process and, therefore, high collision detection, increasing throughput and minimizing data loss. In the application phase, the readers do not need to exchange control packets with each other to control resource allocation, which avoids network overload and communication delay. Simulation results show the robustness and effectiveness of the anti-collision protocol in terms of the number of readers and resources used. The model allows a large number of readers to use the most suitable frequency and time resources for simultaneous and successful tag interrogation. Full article
(This article belongs to the Special Issue Edge Computing for the IoT)
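As an illustration of the learning step (synthetic features and a toy collision rule, not the paper's dataset or network architecture), each reader could train a small feed-forward classifier and then pick the resource with the lowest predicted collision probability:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical per-attempt features: [reader distance, tag density, channel, slot]
X = rng.random((1000, 4))
# Toy labeling rule: a nearby reader on a high channel index causes a collision.
y = (X[:, 0] < 0.3) & (X[:, 2] > 0.5)

# Small feed-forward network predicting collision / no collision.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500).fit(X, y)

# A reader then selects the candidate resource least likely to collide.
candidates = rng.random((8, 4))
best = candidates[model.predict_proba(candidates)[:, 1].argmin()]
```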

17 pages, 5625 KiB  
Article
Implementation of a Programmable Electronic Load for Equipment Testing
by León Felipe Serna-Motoya, José R. Ortiz-Castrillón, Paula Andrea Gil-Vargas, Nicolás Muñoz-Galeano, Juan Bernardo Cano-Quintero and Jesús M. López-Lezama
Computers 2022, 11(7), 106; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070106 - 28 Jun 2022
Cited by 2 | Viewed by 2194
Abstract
This paper presents the implementation of an AC three-phase programmable electronic load (PEL) that emulates load profiles and can be used for testing equipment in microgrids (MGs). The implemented PEL topology is built with a voltage source inverter (VSI), which works as a current-controlled source, and a Buck converter, which permits the dissipation of excess active power. The PEL operation modes, according to the interchange of active and reactive power and its operation in four quadrants, were determined. The power and current limits that establish the control limitations were also obtained. Three control loops were implemented to independently regulate active and reactive power and ensure energy balance in the system. The main contribution of this paper is the presentation of a detailed analysis of the hardware limitations and the operation of the VSI and Buck converter working together. The PEL was implemented for a power of 1.8 kVA. Several experimental tests were carried out with inductive, capacitive, and resistive scenarios to validate the proper operation of the PEL. The experimental tests showed the correct behavior of the AC three-phase currents, VSI input voltage, and Buck converter output voltage of the PEL under profile changes, including the transient response. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
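The abstract does not detail the controller structure; as a generic sketch, each regulation loop could be built around a discrete PI controller of the following form (the gains and sampling period are invented, not the paper's values):

```python
class PI:
    """Minimal discrete PI regulator of the kind used in such control loops."""

    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integral = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += self.ki * error * self.ts  # accumulated integral action
        return self.kp * error + self.integral      # control effort

# e.g. one loop tracks an active-power reference from measured power samples:
p_loop = PI(kp=0.5, ki=50.0, ts=1e-4)
command = p_loop.update(reference=1000.0, measurement=940.0)
```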

19 pages, 362 KiB  
Article
Selection and Location of Fixed-Step Capacitor Banks in Distribution Grids for Minimization of Annual Operating Costs: A Two-Stage Approach
by Oscar Danilo Montoya, Edwin Rivas-Trujillo and Diego Armando Giral-Ramírez
Computers 2022, 11(7), 105; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070105 - 27 Jun 2022
Viewed by 1614
Abstract
The problem of the optimal location and sizing of fixed-step capacitor banks in distribution networks with a radial configuration is studied in this research by applying a two-stage optimization approach. The first stage determines the nodes where the capacitor banks will be placed. In this stage, the exact mixed-integer nonlinear programming (MINLP) model that represents the studied problem is transformed into a mixed-integer quadratic convex (MIQC) model. The solution of the MIQC model ensures that the global optimum is reached, given the convexity of the solution space for each combination of nodes where the capacitor banks will be installed. With the solution of the MIQC model, the suitable nodes for the installation of the fixed-step capacitors are fixed. In the second stage, the successive approximations power flow method is applied recursively to determine the optimal sizes assigned to these compensation devices. Numerical results in three test feeders with 33, 69, and 85 buses demonstrate the effectiveness of the proposed two-stage solution method for two operation scenarios: (i) operation of the distribution system under peak load conditions throughout the year, and (ii) operation considering daily demand variations and renewable generation penetration. Comparative results with the GAMS software confirm the excellent results reached using the proposed optimization approach. All the simulations were carried out in the MATLAB programming environment, version 2021b, using the Gurobi solver in the convex programming tool known as CVX. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
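The successive approximations power flow mentioned in the second stage has a compact fixed-point form: demand-node voltages are updated as V_d <- -inv(Y_dd) (conj(S_d / V_d) + Y_ds V_s) until convergence. A minimal sketch on a made-up two-bus feeder (the update rule is the standard one; the data are illustrative):

```python
import numpy as np

def sa_power_flow(Ydd, Yds, Vs, Sd, tol=1e-10, max_iter=100):
    """Successive approximations power flow for the demand-node voltages."""
    Vd = np.ones(len(Sd), dtype=complex)  # flat start
    for _ in range(max_iter):
        Vd_new = -np.linalg.solve(Ydd, np.conj(Sd / Vd) + Yds @ Vs)
        if np.max(np.abs(Vd_new - Vd)) < tol:
            return Vd_new
        Vd = Vd_new
    return Vd

# Slack bus at 1.0 pu feeding one load bus through a line of admittance y.
y = 1 / (0.01 + 0.03j)
Ydd = np.array([[y]])        # demand-node self-admittance
Yds = np.array([[-y]])       # coupling to the slack bus
Vs = np.array([1.0 + 0j])
Sd = np.array([0.5 + 0.2j])  # load in pu
print(np.abs(sa_power_flow(Ydd, Yds, Vs, Sd)))  # load-bus voltage magnitude
```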

17 pages, 3901 KiB  
Article
Comparison of On-Policy Deep Reinforcement Learning A2C with Off-Policy DQN in Irrigation Optimization: A Case Study at a Site in Portugal
by Khadijeh Alibabaei, Pedro D. Gaspar, Eduardo Assunção, Saeid Alirezazadeh, Tânia M. Lima, Vasco N. G. J. Soares and João M. L. P. Caldeira
Computers 2022, 11(7), 104; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070104 - 24 Jun 2022
Cited by 11 | Viewed by 4245
Abstract
Precision irrigation and optimization of water use have become essential factors in agriculture because water is critical for crop growth. The proper management of an irrigation system should enable the farmer to use water efficiently to increase productivity, reduce production costs, and maximize the return on investment. Efficient water application techniques are essential prerequisites for sustainable agricultural development based on the conservation of water resources and the preservation of the environment. In a previous work, an off-policy deep reinforcement learning model, Deep Q-Network, was implemented to optimize irrigation. The performance of the model was tested for a tomato crop at a site in Portugal. In this paper, an on-policy model, Advantage Actor–Critic, is implemented, and its irrigation scheduling is compared with that of Deep Q-Network for the same tomato crop. The results show that the on-policy Advantage Actor–Critic model reduced water consumption by 20% compared to Deep Q-Network, with only a slight change in the net reward. These models could be developed for other crops with high production and large water requirements in Portugal, such as fruit, cereals, and wine grapes. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
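As a toy illustration of the on-policy update (invented soil-moisture dynamics and reward; the paper's crop and soil model is far richer), an A2C-style loop can be sketched as follows:

```python
import torch
import torch.nn as nn

# State: soil moisture in [0, 1]; actions: 0 = do nothing, 1 = irrigate.
policy = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 2))
critic = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam([*policy.parameters(), *critic.parameters()], lr=1e-2)

def env_step(m, a):
    """Invented dynamics: reward trades crop comfort against water cost."""
    m = min(1.0, m + 0.3) if a else max(0.0, m - 0.1)
    return m, 1.0 - abs(m - 0.6) - (0.2 if a else 0.0)

for episode in range(200):
    m, log_probs, values, rewards = 0.5, [], [], []
    for _ in range(20):
        s = torch.tensor([[m]])
        dist = torch.distributions.Categorical(logits=policy(s))
        a = dist.sample()
        m, r = env_step(m, a.item())
        log_probs.append(dist.log_prob(a))
        values.append(critic(s).squeeze())
        rewards.append(r)
    returns, R = [], 0.0
    for r in reversed(rewards):  # discounted returns
        R = r + 0.95 * R
        returns.insert(0, R)
    returns = torch.tensor(returns)
    values = torch.stack(values)
    advantage = returns - values.detach()  # on-policy advantage estimate
    loss = -(torch.cat(log_probs) * advantage).mean() + ((returns - values) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```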

13 pages, 2919 KiB  
Article
Deploying Serious Games for Cognitive Rehabilitation
by Damiano Perri, Marco Simonetti and Osvaldo Gervasi
Computers 2022, 11(7), 103; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070103 - 23 Jun 2022
Cited by 2 | Viewed by 2165
Abstract
The telerehabilitation of patients with neurological lesions has recently assumed significant importance due to the COVID-19 pandemic, which has reduced patients’ access to healthcare facilities. Therefore, the possibility for these patients to exercise safely in their own homes has emerged as an essential need. Our efforts aim to provide an easy-to-implement, open-source methodology that gives doctors a set of simple, low-cost tools to create and manage patient-adapted batteries of virtual reality telerehabilitation exercises. This is particularly important because many studies show that immediate action and appropriate, specific rehabilitation can guarantee satisfactory results. Appropriate therapy is based on crucial factors, such as the frequency, intensity, and specificity of the exercises. Our work’s most evident result is the definition of a methodology that allows the development of rehabilitation exercises with limited effort in both economic and implementation terms, using software tools accessible to all. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)

24 pages, 10471 KiB  
Article
Meta Deep Learn Leaf Disease Identification Model for Cotton Crop
by Muhammad Suleman Memon, Pardeep Kumar and Rizwan Iqbal
Computers 2022, 11(7), 102; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070102 - 22 Jun 2022
Cited by 34 | Viewed by 6544
Abstract
Agriculture is essential to the growth of every country. Cotton and other major crops fall into the category of cash crops. Cotton is affected by many diseases that cause significant crop damage, and many of these diseases affect yield through the leaf. Detecting disease early saves the crop from further damage. Cotton is susceptible to several diseases, including leaf spot, target spot, bacterial blight, nutrient deficiency, powdery mildew, and leaf curl. Accurate disease identification is important for taking effective measures, and deep learning plays an important role in the identification of plant diseases. The proposed model, based on meta deep learning, is used to accurately identify several cotton leaf diseases. We gathered cotton leaf images from the field for this study. The dataset contains 2385 images of healthy and diseased leaves, and its size was increased with the help of a data augmentation approach. The dataset was trained on a custom CNN, VGG16 transfer learning, ResNet50, and our proposed model: the meta deep learn leaf disease identification model. A meta-learning technique has been proposed and implemented to provide good accuracy and generalization. The proposed model outperformed the other models on the cotton dataset, with an accuracy of 98.53%. Full article
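As a sketch of the kind of transfer-learning baseline the proposed model is compared against (the class count and augmentation choices are placeholders, not the paper's configuration):

```python
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 5  # hypothetical: healthy + four disease types

# Standard augmentation of the kind used to enlarge a small field-image dataset.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Pretrained ResNet50 backbone with a new classification head (torchvision >= 0.13).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():  # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # trainable head
```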

24 pages, 2100 KiB  
Article
Learning-Oriented QoS- and Drop-Aware Task Scheduling for Mixed-Criticality Systems
by Behnaz Ranjbar, Hamidreza Alikhani, Bardia Safaei, Alireza Ejlali and Akash Kumar
Computers 2022, 11(7), 101; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11070101 - 22 Jun 2022
Cited by 3 | Viewed by 1836
Abstract
In Mixed-Criticality (MC) systems, multiple functions with different levels of criticality are integrated into a common platform in order to meet the intended space, cost, and timing requirements at all criticality levels. To guarantee the correct and on-time execution of higher-criticality tasks in emergency modes, various design-time scheduling policies have recently been presented. These techniques are mostly pessimistic, as the occurrence of the worst-case scenario at run-time is a rare event. Nevertheless, they lead to an under-utilized system due to frequent drops of Low-Criticality (LC) tasks and the creation of unused slack times caused by the quick execution of high-criticality tasks. Accordingly, this paper proposes a novel optimistic scheme that introduces a learning-based, drop-aware task scheduling mechanism, which carefully monitors alterations in the behaviour of the MC system at run-time to exploit the generated dynamic slack, reducing the LC task penalty and preventing frequent drops of LC tasks in the future. Based on an extensive set of experiments, our observations show that the proposed approach exploits the accumulated dynamic slack generated at run-time by 9.84% more on average than existing works, and is able to reduce the deadline miss rate by up to 51.78%, and by 33.27% on average, compared to state-of-the-art works. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
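A deliberately simple sketch of run-time slack banking (the admission rule here is a naive stand-in for the paper's learned policy): when a high-criticality job finishes before its worst-case execution time (WCET), the difference is banked and later spent to admit low-criticality jobs instead of dropping them.

```python
class SlackManager:
    """Bank dynamic slack from early job completions and spend it on LC jobs."""

    def __init__(self):
        self.slack = 0.0

    def job_finished(self, wcet, actual):
        self.slack += max(0.0, wcet - actual)  # bank the unused budget

    def admit_lc_job(self, lc_wcet):
        if self.slack >= lc_wcet:              # enough banked slack to run it
            self.slack -= lc_wcet
            return True
        return False                           # otherwise the LC job is dropped

mgr = SlackManager()
mgr.job_finished(wcet=10.0, actual=6.5)  # HC job ran 3.5 time units early
print(mgr.admit_lc_job(lc_wcet=3.0))     # True: the LC job is admitted
```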