Advance in Digital Signal, Image and Video Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (10 March 2023) | Viewed by 28,903

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Dr. Przemysław Falkowski-Gilski
Guest Editor
Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Narutowicza 11/12, 80-233 Gdansk, Poland
Interests: audio; broadcasting; coding; compression; mobile technologies; multimedia; positioning; signal processing; speech processing; video; wireless communication

Prof. Dr. Tadeus Uhl
Guest Editor
Faculty of Economics and Transport Engineering, Maritime University of Szczecin, ul. Waly Chrobrego 1-2, 70-500 Szczecin, Poland
Interests: data compression; feature selection; image denoising; image resolution; Internet technologies; IP networks; quality evaluation; video coding; video streaming

Prof. Dr. Zbigniew Łubniewski
Guest Editor
Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Narutowicza 11/12, 80-233 Gdansk, Poland
Interests: acoustics; GIS; hydroacoustics; multibeam sonar; object reconstruction; remote sensing; seabed classification; satellite communication; signal processing

Special Issue Information

Dear Colleagues,

It is commonly assumed that high performance and quality, especially in signal transmission, lead to broad acceptance and usability. However, with the proliferation of digital systems and services, including audio and video processing, low-quality and best-effort services have gained enormous popularity. This is particularly evident in Internet and mobile technologies, for example, in content storage and management, multimedia consumption, and digital maps and images, and it is clearly visible in the growing number of users and available applications. It also shows that the relationship between performance, quality, and acceptance is not yet fully understood.

The term quality can be understood in many different ways. Engineers perceive it as quality of service (QoS), a synonym for network performance and reliability. Quality can, however, also be defined from the user’s point of view. Quality of experience (QoE), which involves comparing perceptual events with a known reference, focuses on characterizing media transmission systems and services and their acceptance by customers. As observed with the rapid spread of desktop and mobile platforms, the number of multimedia services and highly capable consumer devices continues to grow, which calls for a broader analysis of numerous practical aspects. This Special Issue aims to describe interdisciplinary advances in digital signal, image, and video processing.

In this Special Issue, we invite the scientific community to contribute works highlighting recent advances in digital signal, image, and video processing.

The topics of interest include but are not limited to:

  • Broadcasting and electronic media;
  • Coding and compression;
  • Content storing and management;
  • Geographic information system (GIS);
  • Internet technology;
  • Image and video signals;
  • Mobile technologies;
  • Multimedia processing;
  • Quality of experience (QoE);
  • Quality of service (QoS);
  • Speech and music signals;
  • Signal processing;
  • Streaming services;
  • Subjective and objective metrics;
  • User experience (UX).

Dr. Przemysław Falkowski-Gilski
Prof. Dr. Tadeus Uhl
Prof. Dr. Zbigniew Łubniewski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (15 papers)


Editorial


2 pages, 164 KiB  
Editorial
Special Issue on Advance in Digital Signal, Image and Video Processing
by Przemysław Falkowski-Gilski, Tadeus Uhl and Zbigniew Łubniewski
Appl. Sci. 2023, 13(13), 7642; https://doi.org/10.3390/app13137642 - 28 Jun 2023
Viewed by 530
Abstract
It is assumed that high performance and quality, especially in the case of signal transmission, will lead to great acceptance and usability [...] Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)

Research


14 pages, 7972 KiB  
Article
HCI-Based Wireless System for Measuring the Concentration of Mining Machinery and Equipment Operators
by Jerzy Jagoda, Mariusz Woszczyński, Bartosz Polnik and Przemysław Falkowski-Gilski
Appl. Sci. 2023, 13(9), 5396; https://doi.org/10.3390/app13095396 - 26 Apr 2023
Cited by 1 | Viewed by 961
Abstract
Maintaining stable and reliable working conditions is of vital importance for many companies, especially those operating heavy machinery. Due to human exhaustion, as well as unpredicted hazards and dangerous situations, personnel have to act carefully and plan each move wisely. This paper presents a human–computer interaction (HCI)-based system that uses a concentration level measurement function to increase the safety of machine and equipment operators. The system was developed in response to the results of user experience (UX) analyses of the state of occupational safety, which indicate that the most common cause of accidents is so-called insufficient concentration while performing work. The paper presents the reasons for addressing this issue and describes the proposed electroencephalography (EEG)-based solution in the form of a concentration measurement system concept. We discuss in-field measurements of a prototype solution, together with an analysis of the obtained results. The implementation of a wireless communication interface is also described, along with a visualization application. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
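The abstract does not specify the concentration metric used, so the sketch below is only a hedged illustration: one common EEG heuristic, the ratio of beta-band power to alpha-plus-theta power, computed for a single channel from a Welch power spectral density. The sampling rate, band edges, and the index itself are assumptions, not the authors' method.

```python
# Hypothetical concentration index from one EEG channel (illustrative,
# not the paper's metric): beta power over alpha + theta power.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Approximate signal power in [lo, hi] Hz from the Welch PSD."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return psd[mask].sum() * (f[1] - f[0])  # rectangle-rule integration

def concentration_index(x, fs=256):
    theta = band_power(x, fs, 4, 8)
    alpha = band_power(x, fs, 8, 13)
    beta = band_power(x, fs, 13, 30)
    return beta / (alpha + theta)

# Example on 10 s of synthetic noise standing in for an EEG channel.
rng = np.random.default_rng(0)
print(f"index: {concentration_index(rng.standard_normal(2560)):.3f}")
```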

17 pages, 362 KiB  
Article
Antivirus Evasion Methods in Modern Operating Systems
by Dominik Samociuk
Appl. Sci. 2023, 13(8), 5083; https://doi.org/10.3390/app13085083 - 19 Apr 2023
Cited by 3 | Viewed by 4845
Abstract
In order to safeguard one’s privacy while accessing the internet, it is crucial to have an antivirus program installed on the device. Despite their usefulness in protecting against malware, these programs are not foolproof. Cybercriminals have access to numerous techniques and tools for circumventing antivirus software, which can greatly aid them in their illicit activities. The objective of this research was to examine the most prevalent methods and tools for bypassing antivirus software and to demonstrate how readily accessible and simple they are to use. The aim of this paper is to raise awareness among readers about the associated risks and to assist internet users in protecting themselves from potential threats. The findings of the research indicate that the efficacy of evasion tools is positively correlated with their age and popularity. Tests have shown that, with the latest updates, contemporary antivirus software is capable of resisting virtually all of the tested methods generated using default settings. However, the most significant aspect of this paper is the section presenting experiments with basic but powerful modifications to established evasion mechanisms, which have been found to deceive modern, up-to-date antivirus software. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
Show Figures

Figure 1

21 pages, 6208 KiB  
Article
Assessment of the Quality of Video Sequences Performed by Viewers at Home and in the Laboratory
by Janusz Klink, Stefan Brachmański and Michał Łuczyński
Appl. Sci. 2023, 13(8), 5025; https://doi.org/10.3390/app13085025 - 17 Apr 2023
Cited by 2 | Viewed by 946
Abstract
The paper presents the results of subjective and objective quality assessments of H.264-, H.265-, and VP9-encoded video. Most of the literature is devoted to subjective quality assessment in well-defined laboratory circumstances. However, end users usually watch films in their home environments, which may differ from the conditions recommended for laboratory measurements. This may cause significant differences in the quality assessment scores. Thus, the aim of the research is to show the impact of environmental conditions on the video quality perceived by the user. The subjective assessment was made in two different environments: in the laboratory and in users’ homes, where people often watch movies on their laptops. The video signal was assessed by young viewers who were not experts in the field of quality assessment. The tests were performed taking into account different image resolutions and different bit rates. The research showed strong correlations between the obtained results and the coding bit rates used, and revealed a significant difference between the quality scores obtained in the laboratory and at home. In conclusion, laboratory tests remain necessary for comparative purposes, while the video quality experienced by end users should be assessed under circumstances as close as possible to the user’s home environment. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
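For readers who want to reproduce this kind of analysis, the following sketch (with invented scores, not the paper's data) correlates mean opinion scores (MOS) with coding bit rate and runs a paired test between laboratory and home conditions using SciPy.

```python
# Illustrative MOS analysis on made-up numbers: correlation with bit
# rate, and a paired comparison of the two viewing environments.
import numpy as np
from scipy import stats

bitrate_kbps = np.array([500, 1000, 2000, 4000, 8000])
mos_lab = np.array([2.1, 2.9, 3.6, 4.2, 4.5])
mos_home = np.array([2.5, 3.3, 3.9, 4.4, 4.6])

# MOS tends to grow roughly logarithmically with bit rate.
r, p = stats.pearsonr(np.log2(bitrate_kbps), mos_lab)
print(f"lab MOS vs log2(bitrate): r={r:.3f}, p={p:.4f}")

# Do the two environments yield systematically different scores?
t, p = stats.ttest_rel(mos_lab, mos_home)
print(f"lab vs home (paired t-test): t={t:.2f}, p={p:.4f}")
```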

16 pages, 1536 KiB  
Article
A Nonuniformity Correction Method Based on 1D Guided Filtering and Linear Fitting for High-Resolution Infrared Scan Images
by Bohan Li, Weicong Chen and Yong Zhang
Appl. Sci. 2023, 13(6), 3890; https://doi.org/10.3390/app13063890 - 18 Mar 2023
Cited by 2 | Viewed by 1363
Abstract
During imaging, each detection unit of an infrared focal plane linear array scan detector determines one row of pixels in the output image, so the sensor’s nonuniformity appears as horizontal stripes. Correcting nonuniformity in high-resolution images without destroying delicate details is challenging. In this paper, a single-frame-based nonuniformity correction algorithm is proposed. First, a portion of the single-frame image is extracted. A 1D column guided filter is applied to smooth the extracted segment in the vertical direction, yielding a smooth image and a high-frequency component containing horizontal stripes and texture information. The smooth part is then used as the guide image and the high-frequency part as the input, so that the estimated nonuniformity noise can be extracted with a 1D row guided filter. The corrected segment is obtained by subtracting the estimated nonuniformity noise from the raw segment. Correction coefficients can then be obtained by a linear regression fit between the pre- and post-filtering image segments, and with these coefficients the entire image can be corrected. Based on qualitative and quantitative analysis, the proposed algorithm outperforms other current advanced algorithms in terms of nonuniformity correction and real-time performance. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
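One plausible reading of this pipeline, sketched below under stated assumptions: a 1D guided filter built from running means (scipy.ndimage.uniform_filter1d) is applied column-wise to split the image into smooth and high-frequency parts, then row-wise, guided by the smooth image, to separate stripe noise from texture. The window radius and regularization eps are illustrative, and the exact decomposition may differ from the authors' algorithm.

```python
# Sketch of guided-filter destriping; a plausible reading of the
# abstract, not the published algorithm.
import numpy as np
from scipy.ndimage import uniform_filter1d

def guided_filter_1d(guide, src, radius, eps, axis):
    """Classic guided filter, computed with 1D running means along axis."""
    win = 2 * radius + 1
    mean = lambda arr: uniform_filter1d(arr, size=win, axis=axis)
    m_g, m_s = mean(guide), mean(src)
    cov = mean(guide * src) - m_g * m_s
    var = mean(guide * guide) - m_g * m_g
    a = cov / (var + eps)
    b = m_s - a * m_g
    return mean(a) * guide + mean(b)

def destripe(img, radius=8, eps=1e-3):
    img = img.astype(np.float64)
    # Column-wise (vertical) smoothing splits the image into a smooth
    # part and a high-frequency part holding stripes plus fine texture.
    smooth = guided_filter_1d(img, img, radius, eps, axis=0)
    high = img - smooth
    # Row-wise guided filtering of the high-frequency part, guided by
    # the smooth image, is taken here as the stripe estimate: random
    # texture is smoothed away along rows while per-row offsets survive.
    stripes = guided_filter_1d(smooth, high, radius, eps, axis=1)
    # (The paper additionally derives linear correction coefficients
    # from a segment and applies them to the full frame; omitted here.)
    return img - stripes
```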

14 pages, 3816 KiB  
Article
A Nighttime and Daytime Single-Image Dehazing Method
by Yunqing Tang, Yin Xiang and Guangfeng Chen
Appl. Sci. 2023, 13(1), 255; https://doi.org/10.3390/app13010255 - 25 Dec 2022
Cited by 1 | Viewed by 1581
Abstract
This study starts from current requirements for image dehazing methods: a wider range of usable scenarios, faster processing, and higher image quality. Recent dehazing methods can typically process only daytime or only nighttime hazy images. We propose an effective single-image technique, dubbed MF Dehazer, that handles both nighttime and daytime dehazing, developed following an in-depth analysis of the properties of nighttime hazy images. We also propose a mixed-filter method for estimating ambient illumination, which makes it possible to obtain the color and direction of the illumination. Dehazed nighttime images usually suffer from light source diffusion; we therefore propose a method that compensates for transmission in high-light areas to improve the treatment of light source regions. Regularization then gives the images better contrast. The experimental results show that MF Dehazer outperforms recent dehazing methods, obtaining images with higher contrast and clarity while retaining the original colors. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
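MF Dehazer itself is not reproduced here; as context, the sketch below implements the classical dark-channel-prior baseline that single-image dehazing methods are typically compared against, with a simple ambient light estimate. The patch size and the omega and t_min constants are conventional defaults, not the paper's values.

```python
# Classical dark-channel-prior dehazing baseline (not MF Dehazer).
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dcp(img, patch=15, omega=0.95, t_min=0.1):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    # Dark channel: per-pixel min over RGB, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Ambient light A: mean color of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate, clipped to avoid over-amplification.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Recover scene radiance from the atmospheric scattering model.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```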

23 pages, 1560 KiB  
Article
Quality Assessment of Dual-Parallel Edge Deblocking Filter Architecture for HEVC/H.265
by Prayline Rajabai Christopher and Sivanantham Sathasivam
Appl. Sci. 2022, 12(24), 12952; https://doi.org/10.3390/app122412952 - 16 Dec 2022
Cited by 1 | Viewed by 1249
Abstract
Preserving visual quality is a major constraint for any algorithm in image and video processing applications. AVC and HEVC are the most extensively used video coding standards for video processing applications today. These coding standards use filters to preserve the visual quality of the processed video: AVC uses one in-loop filter, the deblocking filter, while HEVC uses two, the sample adaptive offset filter and the deblocking filter. These filters are implemented in hardware using various optimization techniques, such as reducing power consumption, reducing algorithmic complexity, and occupying less area, and such optimizations must not degrade the quality of the reconstructed video. For the HEVC/H.265 coding standard, a dual-parallel edge deblocking filter architecture is designed, and its effectiveness is evaluated using various quantization values at various resolutions. The quality achieved by the parallel edge filter architecture is on par with the HEVC reference model. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
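The HEVC specification filter is considerably more involved; the sketch below is a deliberately simplified illustration of what an edge deblocking filter does: smooth across vertical 8x8 block boundaries only where the discontinuity is small enough (relative to an assumed threshold beta) to look like a coding artifact rather than a real image edge.

```python
# Simplified deblocking sketch; not the HEVC/H.265 normative filter.
import numpy as np

def deblock_vertical_edges(img, block=8, beta=20):
    """img: 2D uint8 array; smooths each vertical block boundary."""
    out = img.astype(np.int32)
    for x in range(block, out.shape[1] - 1, block):
        p1, p0 = out[:, x - 2], out[:, x - 1]  # pixels left of the edge
        q0, q1 = out[:, x], out[:, x + 1]      # pixels right of the edge
        # Filter only rows where the step is small: a large step is more
        # likely a real image edge that must be preserved.
        m = np.abs(p0 - q0) < beta
        delta = ((q0 - p0) * 4 + (p1 - q1) + 4) // 8  # weak-filter style update
        out[m, x - 1] = np.clip(p0 + delta, 0, 255)[m]
        out[m, x] = np.clip(q0 - delta, 0, 255)[m]
    return out.astype(np.uint8)
```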

20 pages, 6723 KiB  
Article
Neural-Network-Assisted Polar Code Decoding Schemes
by Hengyan Liu, Limin Zhang, Wenjun Yan and Qing Ling
Appl. Sci. 2022, 12(24), 12700; https://doi.org/10.3390/app122412700 - 11 Dec 2022
Cited by 1 | Viewed by 1467
Abstract
The traditional fast successive-cancellation (SC) decoding algorithm effectively reduces the number of decoding steps, but it is sub-optimal and therefore cannot improve bit error performance. To improve bit error performance while keeping the number of decoding steps low, we introduce a neural network subcode that can achieve near-optimal decoding performance and combine it with the traditional fast SC algorithm. While exploring how to combine the neural network node (NNN) with the R1, R0, single parity check (SPC), and Rep nodes, we found that decoding sometimes failed when the NNN was not the last subcode. To solve this problem, we propose two neural-network-assisted decoding schemes: a key-bit-based subcode NN-assisted decoding (KSNNAD) scheme and a last subcode NN-assisted decoding (LSNNAD) scheme. The LSNNAD scheme treats the last subcode as an NNN, and the NNN’s nearly optimal decoding performance yields some performance improvement. To improve performance further, the KSNNAD scheme treats the subcode containing a key bit as an NNN and changes the training data and labels accordingly. Computer simulation results confirm that the two schemes effectively reduce the decoding steps, and their bit error rates (BERs) are lower than those of the successive-cancellation decoder (SCD). Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
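For orientation, the four classical fast-SC special nodes named above (R0, R1, Rep, SPC) admit very short decoders over log-likelihood ratios; a minimal sketch follows, using the usual convention that a positive LLR favours bit 0. The paper's neural network node is not reproduced.

```python
# The four classical fast-SC special-node decoders over LLRs.
import numpy as np

def decode_r0(llr):                # Rate-0: all bits frozen to zero.
    return np.zeros_like(llr, dtype=int)

def decode_r1(llr):                # Rate-1: per-bit hard decision.
    return (llr < 0).astype(int)

def decode_rep(llr):               # Repetition: majority vote via LLR sum.
    return np.full(llr.shape, int(llr.sum() < 0))

def decode_spc(llr):               # Single parity check: hard decision,
    bits = (llr < 0).astype(int)   # then flip the least reliable bit
    if bits.sum() % 2 == 1:        # if even parity is violated.
        bits[np.argmin(np.abs(llr))] ^= 1
    return bits

llr = np.array([1.2, -0.4, 0.1, 2.0])
print(decode_spc(llr))  # -> [0 1 1 0]; least reliable bit flipped for parity
```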

20 pages, 3140 KiB  
Article
Performance Evaluation of a Multidomain IMS/NGN Network Including Service and Transport Stratum
by Sylwester Kaczmarek and Maciej Sac
Appl. Sci. 2022, 12(22), 11643; https://doi.org/10.3390/app122211643 - 16 Nov 2022
Cited by 2 | Viewed by 938
Abstract
The Next Generation Network (NGN) architecture was proposed for delivering various multimedia services with guaranteed quality. For this reason, the elements of the IP Multimedia Subsystem (IMS) concept (an important part of 4G/5G/6G mobile networks) are used in its service stratum. This paper presents comprehensive research on how the parameters of an IMS/NGN network and traffic sources influence mean Call Set-up Delay (E(CSD)) and mean Call Disengagement Delay (E(CDD)), a subset of standardized call processing performance (CPP) parameters, which are significant for both network users and operators. The investigations were performed using our analytical traffic model of a multidomain IMS/NGN network with Multiprotocol Label Switching (MPLS) technology applied in its transport stratum, which provides transport resources for the services requested by users. The performed experiments allow grouping network and traffic source parameters into three categories based on the strength of their effect on E(CSD) and E(CDD). These categories reflect the significance of particular parameters for the network operator and designer (most important, less important and insignificant). Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
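The authors' analytical traffic model is not reproduced here; purely as an illustration of how a mean call set-up delay can be composed from per-server delays along a SIP signalling path, the sketch below uses a textbook M/M/1 approximation with invented numbers.

```python
# Illustrative M/M/1 composition of a mean call set-up delay; not the
# paper's multidomain IMS/NGN model, and all numbers are invented.
service_rate = 2000.0   # messages/s one CSCF server can process (mu)
arrival_rate = 1500.0   # offered signalling load, messages/s (lambda)
hops = 6                # SIP messages traverse several CSCF servers

# M/M/1 mean sojourn time per server: W = 1 / (mu - lambda).
w = 1.0 / (service_rate - arrival_rate)
e_csd = hops * w
print(f"per-server delay {w*1e3:.2f} ms, E(CSD) ~ {e_csd*1e3:.2f} ms")
```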

17 pages, 3800 KiB  
Article
Distinction of Scrambled Linear Block Codes Based on Extraction of Correlation Features
by Jiyuan Tan, Limin Zhang and Zhaogen Zhong
Appl. Sci. 2022, 12(21), 11305; https://doi.org/10.3390/app122111305 - 7 Nov 2022
Cited by 1 | Viewed by 1457
Abstract
To address the problem of distinguishing scrambled linear block codes, this paper proposes a method for identifying the scrambling type of linear block codes by combining correlation features with a convolutional long short-term memory neural network. First, the cross-correlation characteristics of the scrambled sequence symbols are derived and a partial autocorrelation function is constructed; the advantage of the partial autocorrelation function is established by derivation, and the two are combined as the input correlation features. A shallow network combining a convolutional neural network and an LSTM is then constructed; finally, a dataset of scrambled linear block codes is fed into the network model, and training and recognition tests are completed. Simulation results show that, compared with a traditional algorithm based on a multi-fractal spectrum, the proposed method can identify a synchronous scrambler and achieves higher recognition accuracy at high bit error rates. Moreover, the method is suitable for classification under noise, and it lays a foundation for future improvements in scrambler parameter identification. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
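As a rough sketch of the pipeline's shape, the code below derives a normalized autocorrelation feature vector from a hard-bit sequence and declares a small Conv1D + LSTM classifier in Keras. The paper's partial autocorrelation construction, feature dimensions, and network sizes are not reproduced; everything here is an assumption.

```python
# Correlation features feeding a small Conv1D + LSTM classifier;
# shapes and sizes are illustrative, not the paper's.
import numpy as np
import tensorflow as tf

def autocorr_features(bits, max_lag=64):
    """Normalized autocorrelation of a 0/1 sequence at lags 1..max_lag."""
    x = 2.0 * np.asarray(bits, dtype=np.float64) - 1.0  # map to +/-1
    x -= x.mean()
    denom = np.dot(x, x) + 1e-12
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 1)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(2, activation="softmax"),  # scrambler class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Shape check on a random bit sequence (untrained output is arbitrary).
feats = autocorr_features(np.random.default_rng(1).integers(0, 2, 4096))
print(model.predict(feats.reshape(1, -1, 1).astype("float32"), verbose=0))
```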

18 pages, 1018 KiB  
Article
Reliable Integrity Preservation Analysis of Video Contents with Support of Blockchain Systems
by Wan Yeon Lee and Yun-Seok Choi
Appl. Sci. 2022, 12(20), 10280; https://doi.org/10.3390/app122010280 - 12 Oct 2022
Cited by 2 | Viewed by 1357
Abstract
In this article, we propose an integrity preservation analysis scheme for video content that operates on blockchain systems. The proposed scheme stores the core results of video content analysis permanently in the blockchain so that any user can easily verify the outcome of the analysis procedure and its reliability. The scheme first examines the codec software characteristics of digital camera devices and video editing tools in advance and stores them in the blockchain. Next, it extracts the codec software characteristic from the target video file and compares it with the prepared characteristics in the blockchain. From a matched characteristic, the scheme identifies the source camera device or video editing tool that generated the target video file. We also propose an integrity preservation trace scheme that records the transformation history of video content in the blockchain. This scheme compares the original video and its transformed version frame by frame and stores the comparison result, together with a hash value of the transformed video, in the blockchain; the integrity analysis and transformation history of a target file can then be searched easily, with the hash value of the video file used as the search index. We implemented the proposed scheme as a practical tool on a commercial blockchain system, Klaytn. Experimental results show that the scheme performs the integrity analysis of video content with 100% accuracy and, given proper parameters, provides the transformation history of non-original video content with 100% accuracy. The integrity analysis completes within at most one second, and the search for transformation history within at most four seconds. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
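The Klaytn contract interface is not shown in this excerpt, so the sketch below covers only the two primitives the scheme rests on: a SHA-256 file hash used as the search index, and a frame-by-frame comparison producing a transformation record. OpenCV is assumed for frame access, the two files are assumed to have equal resolution, and a plain dictionary stands in for the blockchain store.

```python
# Hash-as-index plus frame-by-frame comparison; the blockchain layer is
# mocked with a dict, since the Klaytn calls are not shown in the paper.
import hashlib
import cv2
import numpy as np

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def frame_diff_record(original_path, transformed_path):
    """Mean absolute per-frame difference; large values flag edited frames.
    Assumes both videos have the same resolution and frame order."""
    a, b = cv2.VideoCapture(original_path), cv2.VideoCapture(transformed_path)
    diffs = []
    while True:
        ok_a, fa = a.read()
        ok_b, fb = b.read()
        if not (ok_a and ok_b):
            break
        diffs.append(float(np.abs(fa.astype(np.int16) - fb.astype(np.int16)).mean()))
    return diffs

ledger = {}  # stand-in for the blockchain key-value store
# ledger[file_sha256("edited.mp4")] = frame_diff_record("orig.mp4", "edited.mp4")
```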

19 pages, 523 KiB  
Article
Improving Streaming Video with Deep Learning-Based Network Throughput Prediction
by Arkadiusz Biernacki
Appl. Sci. 2022, 12(20), 10274; https://doi.org/10.3390/app122010274 - 12 Oct 2022
Cited by 3 | Viewed by 1844
Abstract
Video streaming represents a significant part of Internet traffic. During the playback, a video player monitors network throughput and dynamically selects the best video quality in given network conditions. Therefore, the video quality depends heavily on the player’s estimation of network throughput, which is challenging in the volatile environment of mobile networks. In this work, we improved the throughput estimation using prediction produced by LSTM artificial neural networks (ANNs). Hence, we acquired data traces from 4G and 5G mobile networks and supplied them to two deep LSTM ANNs, obtaining a throughput prediction for the next four seconds. Our analysis showed that the ANNs achieved better prediction accuracy compared to a naive predictor based on a moving average. Next, we replaced the video player’s default throughput estimation based on the naive predictor with the LSTM output. The experiment revealed that the traffic prediction improved video quality between 5% and 25% compared to the default estimation. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
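A hedged sketch of the comparison described above: sliding windows of past throughput samples feed a two-layer LSTM that predicts the next four seconds, next to the naive moving-average baseline. The window length, layer sizes, and one-sample-per-second assumption are illustrative, not the paper's configuration.

```python
# LSTM throughput forecaster vs. moving-average baseline (illustrative).
import numpy as np
import tensorflow as tf

HISTORY, HORIZON = 20, 4   # 20 s of history -> next 4 s

def make_windows(series):
    X, y = [], []
    for i in range(len(series) - HISTORY - HORIZON + 1):
        X.append(series[i:i + HISTORY])
        y.append(series[i + HISTORY:i + HISTORY + HORIZON])
    return np.array(X)[..., None], np.array(y)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(HISTORY, 1)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(HORIZON),
])
model.compile(optimizer="adam", loss="mae")

def moving_average_forecast(window):
    """Naive baseline: repeat the window mean for the whole horizon."""
    return np.repeat(window.mean(), HORIZON)

# Quick demo on synthetic throughput samples (Mbit/s).
series = np.abs(np.random.default_rng(2).normal(20, 5, 600))
X, y = make_windows(series)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```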

12 pages, 2305 KiB  
Article
Application of Transfer Learning and Convolutional Neural Networks for Autonomous Oil Sheen Monitoring
by Jialin Dong, Katherine Sitler, Joseph Scalia, Yunhao Ge, Paul Bireta, Natasha Sihota, Thomas P. Hoelen and Gregory V. Lowry
Appl. Sci. 2022, 12(17), 8865; https://doi.org/10.3390/app12178865 - 3 Sep 2022
Cited by 2 | Viewed by 1850
Abstract
Oil sheen on the water surface can indicate a source of hydrocarbon in underlying subaquatic sediments. Here, we develop and test the accuracy of an algorithm for automated real-time visual monitoring of the water surface for detecting oil sheen. This detection system is part of an automated oil sheen screening system (OS-SS) that disturbs subaquatic sediments and monitors for the formation of sheen. We first created a new near-surface oil sheen image dataset. We then used this dataset to develop an image-based Oil Sheen Prediction Neural Network (OS-Net), a classification machine learning model based on a convolutional neural network (CNN), to predict the existence of oil sheen on the water surface from images. We explored the effectiveness of different strategies of transfer learning to improve the model accuracy. The performance of OS-Net and the oil detection accuracy reached up to 99% on a test dataset. Because the OS-SS uses video to monitor for sheen, we also created a real-time video-based oil sheen prediction algorithm (VOS-Net) to deploy in the OS-SS to autonomously map the spatial distribution of sheening potential of hydrocarbon-impacted subaquatic sediments. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
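The published OS-Net architecture is not reproduced here; the sketch below shows a common transfer-learning recipe of the kind the paper explores: freeze an ImageNet-pretrained backbone (ResNet-18 is an assumption) and train only a new binary sheen / no-sheen head.

```python
# Generic transfer-learning recipe (backbone and hyperparameters are
# assumptions, not the published OS-Net configuration).
import torch
import torch.nn as nn
from torchvision import models  # torchvision >= 0.13 for the weights API

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable binary head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```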

23 pages, 663 KiB  
Article
A Case Study on Implementing Agile Techniques and Practices: Rationale, Benefits, Barriers and Business Implications for Hardware Development
by Paweł Weichbroth
Appl. Sci. 2022, 12(17), 8457; https://doi.org/10.3390/app12178457 - 24 Aug 2022
Cited by 7 | Viewed by 4390
Abstract
Agile methodologies, along with the corresponding tools and practices, are claimed to help teams manage their work more effectively and conduct it more efficiently, while fostering the highest-quality product within the constraints of the budget. Accordingly, awareness and adoption of Agile frameworks both within and outside the software industry have increased significantly. Yet the latest studies show that the adoption of Agile techniques and practices is not one-size-fits-all, and they highlight challenges, risks, and limitations across numerous domains. The state-of-the-art literature provides comprehensive reading in this regard; in the case of hardware manufacturing, however, it is sparse and fragmented. To fill this gap, the goal of this study is to analyze and present an in-depth account of the implementation of a mix of Agile-oriented tools and practices. To this end, a single-industry case study was undertaken, based on primary data obtained through an interview protocol and secondary data extracted from the project’s documentation. The findings concern three areas. First, the rationale behind implementing Agile for hardware development is explained. Second, the implemented Agile techniques and practices are identified, along with the supporting tools through which their adoption was successfully undertaken. Third, the areas positively impacted by their application are highlighted, with the corresponding evaluation measures deployed; moreover, the barriers encountered in adopting Agile practices, and the benefits gained from particular techniques, are discussed. These findings may be of great importance for both researchers and practitioners searching for empirical evidence on Agile-oriented implementations. Finally, in terms of both benefits and barriers, business implications for hardware development are formulated. Alongside this, numerous open issues and questions present interesting research avenues concerning, in particular, the effectiveness of collaboration and areas of communication through the lens of Agile techniques and practices. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)

18 pages, 2642 KiB  
Article
Tool Wear Monitoring Using Improved Dragonfly Optimization Algorithm and Deep Belief Network
by Leo Gertrude David, Raj Kumar Patra, Przemysław Falkowski-Gilski, Parameshachari Bidare Divakarachari and Lourdusamy Jegan Antony Marcilin
Appl. Sci. 2022, 12(16), 8130; https://doi.org/10.3390/app12168130 - 14 Aug 2022
Cited by 10 | Viewed by 1556
Abstract
In recent decades, tool wear monitoring has played a crucial role in improving industrial production quality and efficiency. In the machining process, it is important to predict both tool cost and tool life, and to reduce equipment downtime. Conventional methods require enormous human resources and expert skill to obtain precise tool wear information. To identify tool wear types automatically, deep learning models are extensively used in existing studies. In this manuscript, a new model is proposed for the effective classification of both serviceable and worn cutting edges. Initially, a dataset of 254 images of edge profile cutting heads is chosen for experimental analysis; the circular Hough transform, Canny edge detector, and standard Hough transform are then used to segment 577 cutting edge images, of which 276 are disposable and 301 are functional. Furthermore, feature extraction is carried out on the segmented images utilizing Local Binary Patterns (LBP), Speeded-Up Robust Features (SURF), Harris Corner Detection (HCD), Histogram of Oriented Gradients (HOG), and Grey-Level Co-occurrence Matrix (GLCM) descriptors to extract texture feature vectors. Next, the dimension of the extracted features is reduced by an Improved Dragonfly Optimization Algorithm (IDOA), which lowers the computational complexity and running time of the Deep Belief Network (DBN) that classifies the serviceable and worn cutting edges. The experimental evaluations showed that the IDOA-DBN model attained 98.83% accuracy on the patch configuration of full edge division, which is superior to existing deep learning models. Full article
(This article belongs to the Special Issue Advance in Digital Signal, Image and Video Processing)
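As a hedged illustration of the handcrafted descriptors listed above, the sketch below extracts LBP, HOG, and GLCM features with scikit-image (SURF and Harris corners are omitted); all parameter choices are illustrative rather than the paper's.

```python
# Texture descriptors with scikit-image; parameters are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern, hog, graycomatrix, graycoprops

def texture_features(gray):
    """gray: uint8 grayscale image."""
    # LBP histogram (uniform patterns, 8 neighbours, radius 1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # HOG descriptor over the whole image.
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    # GLCM contrast and homogeneity at distance 1, angle 0.
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_vec = np.array([graycoprops(glcm, p)[0, 0]
                         for p in ("contrast", "homogeneity")])
    return np.concatenate([lbp_hist, hog_vec, glcm_vec])

# Shape check on a random 64x64 image.
gray = np.random.default_rng(3).integers(0, 256, (64, 64), dtype=np.uint8)
print(texture_features(gray).shape)
```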
