Computers, Volume 10, Issue 7 (July 2021) – 7 articles

Cover Story: Product lifecycle management (PLM) accompanies a product through all of its lifecycle phases, from idea generation to recycling. Numerous tools and data models are used in this process today, which poses a significant challenge to its efficient digitalized management. The asset administration shell (AAS) aims to be a standardized digital representation of products. In accordance with this objective, it has the potential to integrate all of the data generated during the product's lifecycle into a single data model and to provide a universally valid PLM interface. This research examines and evaluates this potential, demonstrating the application of the AAS in an order-controlled production process, where engineering data are provided to the production process via the AAS.
26 pages, 1413 KiB  
Review
Distributed Interoperable Records: The Key to Better Supply Chain Management
by Annegret Henninger and Atefeh Mashatan
Computers 2021, 10(7), 89; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070089 - 19 Jul 2021
Cited by 8 | Viewed by 4155
Abstract
The global supply chain is a network of interconnected processes that create, use, and exchange records but that were not designed to interact with one another. As such, the key to unlocking the full potential of supply chain management (SCM) technologies is achieving interoperability across the participating records systems and networks. We review existing research and solutions using distributed ledger technology (DLT) and provide a survey of its current state of practice. We additionally propose a holistic solution: a DLT-based future state that could enable the interoperable, efficient, reliable, and secure exchange of records with integrity. Finally, we provide a gap analysis between our proposed future state and the current state, which also serves as a gap analysis for many fractional DLT-based SCM solutions and research.
(This article belongs to the Special Issue Blockchain Technology and Recordkeeping)
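The survey's central premise, that records exchanged across parties need a shared, tamper-evident log, can be illustrated with a minimal hash-chained ledger. This is a simplification of what a full DLT provides (no consensus, no distribution), and all names here are illustrative, not from the paper:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Deterministically hash a record together with its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class RecordLedger:
    """Append-only ledger: each entry commits to the previous one,
    so later tampering with any record is detectable by all parties."""
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        h = record_hash(record, prev)
        self.entries.append((record, h))
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for record, h in self.entries:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True
```

Because each hash covers the previous hash, altering one shipping record invalidates every later entry, which is the integrity property the proposed future state relies on.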

14 pages, 973 KiB  
Article
Using Autoencoders for Anomaly Detection and Transfer Learning in IoT
by Chin-Wei Tien, Tse-Yung Huang, Ping-Chun Chen and Jenq-Haur Wang
Computers 2021, 10(7), 88; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070088 - 15 Jul 2021
Cited by 17 | Viewed by 4112
Abstract
With the development of Internet of Things (IoT) technologies, more and more smart devices are connected to the Internet. Because these devices were designed primarily for better connectivity with each other, very limited security mechanisms were considered, and developing separate security mechanisms for the diverse behaviors of different devices would be costly. Given new and changing devices and attacks, it would be helpful if the characteristics of diverse device types could be learned dynamically for better protection. In this paper, we propose a machine learning approach to device type identification through network traffic analysis for anomaly detection in IoT. First, the characteristics of different device types are learned from the network packets they generate using supervised learning methods. Second, by learning important features from selected device types, we compare the effects of unsupervised learning methods, including one-class SVM, isolation forest, and autoencoders, for dimensionality reduction. Finally, we evaluate the performance of anomaly detection by transfer learning with autoencoders. In our experiments on real data from the target factory, the best device type identification performance was achieved by XGBoost, with an accuracy of 97.6%. When adopting autoencoders to learn features from network packets in the Modbus TCP protocol, the best F1 score of 98.36% was achieved. Comparable anomaly detection performance was achieved when using autoencoders for transfer learning from a reference dataset in the literature to our target site. This shows the potential of the proposed approach for automatic anomaly detection in smart factories. Further investigation is needed to verify the approach with different device types in different IoT environments.

17 pages, 1015 KiB  
Article
A Low Distortion Audio Self-Recovery Algorithm Robust to Discordant Size Content Replacement Attack
by Juan Jose Gomez-Ricardez and Jose Juan Garcia-Hernandez
Computers 2021, 10(7), 87; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070087 - 14 Jul 2021
Cited by 1 | Viewed by 1726
Abstract
Although the development of watermarking techniques has enabled designers to tackle normal processing attacks (e.g., amplitude scaling, noise addition, re-compression), robustness against malicious attacks remains a challenge. The discordant size content replacement attack targets watermarking schemes by performing content replacement that increases or reduces the number of samples in the signal. This attack modifies both the content and the length of the signal, desynchronizing the watermark's position and hindering its extraction. In this paper, a source-channel coding approach was applied to protect an audio signal against this attack. Before the source-channel encoding, a decimation technique was performed to reduce the number of samples in the original signal by one-half. This technique allowed compression at a bit rate of 64 kbps while obtaining a watermarked audio signal of excellent quality. During watermark restoration, an interpolation was applied after the source-channel decoding to recover the content and the length. The decimation–interpolation procedure was chosen because it is a linear, time-invariant operation that is well suited to digital audio. A synchronization strategy was designed to detect the positions where the number of samples in the signal was increased or reduced. The restoration ability of the proposed scheme was tested with a mathematical model of the discordant size content replacement attack. The attack model confirmed that a synchronization strategy is necessary to correctly extract the watermark and recover the tampered signal. Experimental results show that the scheme has better restoration ability than state-of-the-art schemes: it restored a tampered area of around 20% with very good quality, and up to 58.3% with acceptable quality. Robustness against the discordant size content replacement attack was achieved with a transparency threshold above 2.

24 pages, 34431 KiB  
Article
Separable Reversible Data Hiding in Encryption Image with Two-Tuples Coding
by Jijun Wang and Soo Fun Tan
Computers 2021, 10(7), 86; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070086 - 07 Jul 2021
Cited by 2 | Viewed by 2133
Abstract
Separable Reversible Data Hiding in Encryption Image (RDH-EI) has become widely used in clinical and military applications, social cloud, and security surveillance in recent years, contributing significantly to preserving the privacy of digital images. Aiming to address the shortcomings of recent works that achieve a high embedding rate at the expense of image quality, security, reversibility, and separability, we propose a two-tuples coding method that exploits the intrinsic characteristics of adjacent pixels in the carrier image, which exhibit high redundancy between their high-order bits. We then construct an RDH-EI scheme using high-order bit compression, low-order bit combination, vacancy filling, data embedding, and pixel diffusion. Unlike conventional RDH-EI practices, which degrade the original image while embedding additional data, the content owner in our scheme generates the embeddable space in advance, thus lessening the risk of image destruction on the data hider's side. The experimental results indicate the effectiveness of our scheme: the carrier images were effectively compressed at a ratio of 28.91%, and the embedding rate increased to 1.753 bpp with higher image quality, measured as a PSNR of 45.76 dB.
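The underlying observation, that adjacent pixels tend to share their high-order bits, can be sketched with a toy two-tuple encoder. This is a simplification for illustration only (the paper's coding, embedding, and diffusion steps differ in detail):

```python
def compress_high_bits(pixels: list[int]) -> list[tuple[int, int]]:
    """Encode each 8-bit pixel as a two-tuple (flag, payload):
    flag=0 -> high nibble repeats the previous pixel's, store low nibble only;
    flag=1 -> store the full byte. Runs of similar pixels free up bits
    that an RDH-EI scheme can reserve as embeddable space."""
    out, prev_high = [], None
    for p in pixels:
        high, low = p >> 4, p & 0x0F
        if high == prev_high:
            out.append((0, low))   # 1 + 4 bits instead of 8
        else:
            out.append((1, p))     # 1 + 8 bits
        prev_high = high
    return out

def decompress(tuples: list[tuple[int, int]]) -> list[int]:
    """Losslessly invert compress_high_bits, as reversibility requires."""
    pixels, prev_high = [], None
    for flag, payload in tuples:
        p = (prev_high << 4) | payload if flag == 0 else payload
        pixels.append(p)
        prev_high = p >> 4
    return pixels

pixels = [200, 201, 203, 198, 60, 61, 62]
assert decompress(compress_high_bits(pixels)) == pixels
```

In a smooth image region most tuples take the short form, and the saved bits become the vacancy that the data hider later fills, without touching the recoverable original.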

14 pages, 10760 KiB  
Article
Prototyping a Smart Contract Based Public Procurement to Fight Corruption
by Tim Weingärtner, Danielle Batista, Sandro Köchli and Gilles Voutat
Computers 2021, 10(7), 85; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070085 - 01 Jul 2021
Cited by 14 | Viewed by 5254
Abstract
Corruption in public procurement is a worldwide phenomenon that causes immense financial and reputational damage. Especially in developing countries, corruption is a widespread issue due to secrecy and a lack of transparency. An important instrument for assuring transparency and accountability is the record, which is managed and controlled by recordkeeping systems. Blockchain technology, and more precisely blockchain-based smart contracts, are emerging technological tools that can be used as recordkeeping systems and as a means to mitigate some of the fraud involving public procurement records. Immutability, transparency, distribution, and automation are some of the features of smart contracts already implemented in several applications to avoid malicious human interference. In this paper, we discuss some of the frauds in public procurement, and we propose smart contracts to automate different stages of the public procurement procedure, attempting to fix its biggest current weaknesses. The processes we focus on include the bidding process, supplier habilitation, and delivery verification. In these three subprocesses, common irregularities include human fallibility, improper information disclosure, and hidden agreements, which concern not only governments but also civil society. To show the feasibility and usability of our proposal, we have implemented a prototype that demonstrates the process using sample data.
(This article belongs to the Special Issue Blockchain Technology and Recordkeeping)
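One way a smart contract can keep a bidding process honest is the commit-reveal pattern: suppliers first submit a hash commitment to their bid, so no one (including the buyer) can see or leak bids before the deadline. The sketch below illustrates the pattern in Python; the paper's prototype is an actual on-chain contract, and all names here are illustrative:

```python
import hashlib
import secrets

def commit(bid: int, nonce: bytes) -> str:
    """Hash commitment: the bid is binding but hidden until reveal."""
    return hashlib.sha256(str(bid).encode() + nonce).hexdigest()

class SealedBidding:
    """Commit-reveal sealed bidding: submit hashes first, reveal bids later."""
    def __init__(self):
        self.commitments = {}  # supplier -> commitment hash
        self.revealed = {}     # supplier -> revealed bid

    def submit(self, supplier: str, commitment: str) -> None:
        self.commitments[supplier] = commitment

    def reveal(self, supplier: str, bid: int, nonce: bytes) -> bool:
        """Accept a reveal only if it matches the earlier commitment."""
        ok = commit(bid, nonce) == self.commitments.get(supplier)
        if ok:
            self.revealed[supplier] = bid
        return ok

    def winner(self) -> str:
        return min(self.revealed, key=self.revealed.get)  # lowest bid wins

auction = SealedBidding()
n1, n2 = secrets.token_bytes(16), secrets.token_bytes(16)
auction.submit("supplier_a", commit(1000, n1))
auction.submit("supplier_b", commit(900, n2))
assert auction.reveal("supplier_a", 1000, n1)
assert auction.reveal("supplier_b", 900, n2)
print(auction.winner())  # supplier_b (lowest bid)
```

Because a supplier cannot change a bid after committing, hidden agreements that depend on learning competitors' bids early are structurally blocked.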

18 pages, 3005 KiB  
Article
Product Lifecycle Management with the Asset Administration Shell
by Andreas Deuter and Sebastian Imort
Computers 2021, 10(7), 84; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070084 - 23 Jun 2021
Cited by 11 | Viewed by 4264
Abstract
Product lifecycle management (PLM) as a holistic process encompasses the idea generation for a product, its conception, and its production, as well as its operating phase. Numerous tools and data models are used throughout this process. In recent years, industry and academia have developed integration concepts to realize efficient PLM across all domains and phases. However, the solutions available in practice require specific interfaces and tend to be vendor dependent. The Asset Administration Shell (AAS) aims to be a standardized digital representation of an asset (e.g., a product). In accordance with this objective, it has the potential to integrate all data generated during the PLM process into one data model and to provide a universally valid interface for all PLM phases. However, to date, there is no holistic concept that demonstrates this potential. The goal of this research work is to develop and validate such an AAS-based concept. This article demonstrates the application of the AAS in an order-controlled production process, including the semi-automatic generation of PLM-related AAS data. Furthermore, it discusses the potential of the AAS as a standard interface providing smooth data integration throughout the PLM process.
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2020)
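The integration idea, one digital representation aggregating data from every lifecycle phase behind a single interface, can be sketched with a minimal AAS-like data model. The real AAS metamodel (as standardized by the Plattform Industrie 4.0 specifications) is far richer; the structure and property names below are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Submodel:
    """One aspect of the asset, e.g. design data or operating data."""
    id_short: str
    properties: dict = field(default_factory=dict)

@dataclass
class AssetAdministrationShell:
    """Minimal AAS-like shell: submodels from every PLM phase are
    reachable through one uniform access path."""
    asset_id: str
    submodels: dict = field(default_factory=dict)

    def add(self, sm: Submodel) -> None:
        self.submodels[sm.id_short] = sm

    def get_property(self, submodel: str, prop: str):
        return self.submodels[submodel].properties[prop]

aas = AssetAdministrationShell("urn:example:product:4711")
aas.add(Submodel("Design", {"cad_model": "drive_v3.step"}))
aas.add(Submodel("Production", {"order_id": "ORD-0042"}))
aas.add(Submodel("Operation", {"runtime_hours": 1250}))
print(aas.get_property("Operation", "runtime_hours"))  # 1250
```

A production system, a CAD tool, and a maintenance service would each read and write their own submodel, which is what replaces the pairwise, vendor-specific interfaces criticized in the abstract.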

17 pages, 429 KiB  
Article
The Use of Template Miners and Encryption in Log Message Compression
by Péter Marjai, Péter Lehotay-Kéry and Attila Kiss
Computers 2021, 10(7), 83; https://0-doi-org.brum.beds.ac.uk/10.3390/computers10070083 - 23 Jun 2021
Cited by 3 | Viewed by 2078
Abstract
Presently, almost every piece of computer software produces many log messages based on events and activities during its use. These files contain valuable runtime information that can be used in a variety of applications, such as anomaly detection, error prediction, and template mining. Usually, the generated log messages are raw, meaning they have an unstructured format, so these messages have to be parsed before data mining models can be applied. After parsing, template miners can be applied to the data to retrieve the events occurring in the log file. These events consist of two parts: the template, which is the fixed part and is the same for all instances of the same event type, and the parameter part, which varies across instances. To decrease the size of the log messages, we use the mined templates to build a dictionary for the events, and only store the dictionary, the event ID, and the parameter list. We use six template miners to acquire the templates, namely IPLoM, LenMa, LogMine, Spell, Drain, and MoLFI. In this paper, we evaluate the compression capacity of our dictionary method with these algorithms. Since parameters could be sensitive information, we also encrypt the files after compression and measure the changes in file size. We also examine the speed of the log miner algorithms. Based on our experiments, LenMa has the best compression rate, with an average of 67.4%; however, because of its high runtime, we suggest combining our dictionary method with IPLoM and FFX, since it is the fastest of all the methods and has a 57.7% compression rate.
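The dictionary method described above can be sketched as follows. Real template miners such as IPLoM or Drain learn templates from message structure; this toy version just treats digit runs as parameters, which is enough to show how storing only (template ID, parameter list) shrinks the log:

```python
import re

def to_event(line: str, template_ids: dict) -> tuple[int, list[str]]:
    """Split a raw log line into its fixed template and variable parameters,
    then store only the template's dictionary ID plus the parameter list."""
    params = re.findall(r"\d+", line)
    template = re.sub(r"\d+", "<*>", line)
    tid = template_ids.setdefault(template, len(template_ids))
    return tid, params

logs = [
    "connection from 10 closed after 300 ms",
    "connection from 22 closed after 112 ms",
    "disk 3 usage at 91 percent",
]
template_ids = {}  # the shared dictionary: template -> event ID
events = [to_event(line, template_ids) for line in logs]
print(len(template_ids))  # 2 distinct templates
print(events[1])          # (0, ['22', '112'])
```

Only the small dictionary and the compact event tuples need to be kept, and since the parameter lists are where sensitive values end up, they are the natural target for the encryption step the paper measures.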
