Advanced in Artificial Intelligence and Cloud Computing

A special issue of Symmetry (ISSN 2073-8994). This special issue belongs to the section "Computer".

Deadline for manuscript submissions: closed (31 March 2018) | Viewed by 117249

Special Issue Editors

Prof. Dr. Yunsick Sung
Dept. of Multimedia Engineering, Dongguk University, Seoul, Korea
Interests: artificial intelligence; machine learning; cloud computing; information security

Prof. Dr. Vincenzo Loia
Department of Management & Innovation Systems (DISA-MIS), University of Salerno, 84084 Fisciano, Italy
Interests: soft computing; agent technology for designing technologically complex environments; intelligent agents

Special Issue Information

Dear Colleagues,

In the last decade, knowledge bases have attracted tremendous interest from both academia and industry in the artificial intelligence field, and many large knowledge bases are now available. Cloud computing has emerged as today's most compelling computing paradigm for providing services over a shared infrastructure, opening a new door to reconciling the explosive growth in demand for digital resources with the associated costs. However, artificial intelligence and cloud computing also raise many research challenges, such as resource management, data security, privacy, and trusted computing.

This Special Issue intends to address the current research challenges in artificial intelligence and seeks articles discussing cloud computing in business from various perspectives, such as the design and development of new tools and techniques, comprehensive analytics, applications, intelligent decision making, and so forth. It covers both pure research and applications within the scope of artificial intelligence and cloud computing technologies, such as novel artificial intelligence algorithms, knowledge-based systems built on cloud computing, and effective big data processing frameworks. All submitted papers will be peer-reviewed and selected on the basis of both their quality and their relevance to the theme of this Special Issue. The topics of this Special Issue include, but are not limited to, the following areas:

  • Knowledge-based systems

  • Artificial Intelligence tools and applications

  • Multimedia and cognitive informatics

  • Neural networks

  • Natural language processing

  • Pattern recognition

  • Data mining and machine learning tools

  • Heuristic and AI planning strategies and tools

  • Computational theories of learning

  • Hybrid intelligent systems

  • Intelligent system architectures

  • Agent-based and multi-agent systems

  • Big data capture, representation and analytics

  • Constraint satisfaction, search and optimization

  • Data mining and knowledge discovery

  • Machine learning and application

  • Model-based systems

  • Multidisciplinary AI

  • AI for robotics

  • Cloud computing architecture and systems

  • Cloud computing models, simulations, designs, and paradigms

  • Cloud management and operations

  • Dynamic resource provisioning and consumption

  • Cloud computing technologies, services and applications

  • Cloud-based security evaluation and benchmarks

  • Security policy, theory, and models based on big data

Prof. Dr. Yunsick Sung
Prof. Dr. Vincenzo Loia
Prof. Dr. Neil Y. Yen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Symmetry is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence

  • Big Data

  • Machine Learning

  • Data Mining

Published Papers (15 papers)

Research


14 pages, 6741 KiB  
Article
A Prototype of Speech Interface Based on the Google Cloud Platform to Access a Semantic Website
by Jimmy Aurelio Rosales-Huamaní, José Luis Castillo-Sequera, Juan Carlos Montalvan-Figueroa and Joseps Andrade-Choque
Symmetry 2018, 10(7), 268; https://doi.org/10.3390/sym10070268 - 08 Jul 2018
Cited by 4 | Viewed by 4319
Abstract
The main restriction of the Semantic Web is the difficulty of the SPARQL language, which is necessary for extracting information from a knowledge representation, also known as an ontology. Making the Semantic Web accessible to people who do not know SPARQL calls for friendlier interfaces, and natural language is a good alternative. This paper presents the implementation of a friendly prototype interface, activated by voice, to query and retrieve information from websites built with Semantic Web tools, so that end users avoid the complicated SPARQL language. To achieve this, the interface recognizes a spoken query and converts it into text, processes the text through a Java program to identify keywords, generates a SPARQL query, extracts the information from the website, and reads it aloud to the user. In our work, the Google Cloud Speech API performs Speech-to-Text conversion, and Text-to-Speech conversion is done with SVOX Pico. We measured three variables: query success rate, query response time, and a usability survey; their values allow the evaluation of our prototype. Finally, the proposed interface provides a new approach to the problem, using the cloud as a service and reducing barriers of access to the Semantic Web for people without technical knowledge of Semantic Web technologies.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
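To make the keyword-to-SPARQL step concrete, here is a minimal Python sketch of how a recognized keyword could be turned into a label-matching SPARQL query and sent to an endpoint over the standard SPARQL protocol. The endpoint URL and keyword are hypothetical placeholders; the paper's own pipeline is implemented in Java with the Google Cloud Speech API.

    import requests

    PREFIX = "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>"

    def build_sparql(keyword):
        # Match any resource whose rdfs:label contains the spoken keyword.
        return PREFIX + """
        SELECT ?s ?label WHERE {
            ?s rdfs:label ?label .
            FILTER(CONTAINS(LCASE(STR(?label)), "%s"))
        } LIMIT 10""" % keyword.lower()

    def ask(endpoint, keyword):
        # Standard SPARQL protocol: pass the query in the "query" parameter
        # and request JSON results via the Accept header.
        r = requests.get(endpoint, params={"query": build_sparql(keyword)},
                         headers={"Accept": "application/sparql-results+json"})
        r.raise_for_status()
        return [b["label"]["value"] for b in r.json()["results"]["bindings"]]

    # ask("http://example.org/sparql", "museum")  # hypothetical endpoint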

18 pages, 1940 KiB  
Article
An Online Algorithm for Dynamic NFV Placement in Cloud-Based Autonomous Response Networks
by Leonardo Ochoa-Aday, Cristina Cervelló-Pastor, Adriana Fernández-Fernández and Paola Grosso
Symmetry 2018, 10(5), 163; https://doi.org/10.3390/sym10050163 - 15 May 2018
Cited by 16 | Viewed by 4387
Abstract
Autonomous response networks are becoming a reality thanks to recent advances in cloud computing, Network Function Virtualization (NFV) and Software-Defined Networking (SDN) technologies. These enhanced networks fully enable autonomous real-time management of virtualized infrastructures. In this context, one of the major challenges is how virtualized network resources can be effectively placed. Although this issue has been addressed before in cloud-based environments, it is not yet completely resolved for the online placement of virtual machines. For this purpose, this paper proposes an online heuristic algorithm called Topology-Aware Placement of Virtual Network Functions (TAP-VNF) as a low-complexity solution for such dynamic infrastructures. As a complement, we provide a general formulation of the network function placement problem using the service function chaining concept. Furthermore, two metrics, consolidation and aggregation, validate the efficiency of the proposal in experimental simulations. We compared our approach with optimal solutions in terms of consolidation and aggregation ratios, showing more suitable performance for dynamic cloud-based environments. The results also show that TAP-VNF outperforms existing approaches based on traditional bin packing schemes.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
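For intuition, the following Python sketch shows one plausible greedy, topology-aware placement step in the spirit of TAP-VNF: each VNF of a service chain is placed on the feasible node with the fewest hops from the previous one, which naturally consolidates functions onto already-used nodes. The heuristic details here are illustrative assumptions, not the authors' exact algorithm.

    import networkx as nx

    def place_chain(g, capacity, chain, src):
        # chain: CPU demand per VNF, in service order; returns one node per VNF.
        placement, here = [], src
        for demand in chain:
            feasible = [n for n in g.nodes if capacity[n] >= demand]
            if not feasible:
                raise RuntimeError("no node can host this VNF")
            # Prefer nearby nodes; zero-hop ties keep the chain consolidated.
            nxt = min(feasible, key=lambda n: nx.shortest_path_length(g, here, n))
            capacity[nxt] -= demand
            placement.append(nxt)
            here = nxt
        return placement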

16 pages, 3872 KiB  
Article
An Intelligent Improvement of Internet-Wide Scan Engine for Fast Discovery of Vulnerable IoT Devices
by Hwankuk Kim, Taeun Kim and Daeil Jang
Symmetry 2018, 10(5), 151; https://doi.org/10.3390/sym10050151 - 10 May 2018
Cited by 17 | Viewed by 6355
Abstract
Since 2016, the Mirai and Persirai malware have infected hundreds of thousands of Internet of Things (IoT) devices and created massive IoT botnets, which caused distributed denial of service (DDoS) attacks. IoT malware targets IoT devices with known security weaknesses, so techniques are needed to prevent these devices from being exploited by attackers. However, unlike high-performance PCs, IoT devices are lightweight, low-power, and low-cost, with performance limitations in processing and memory that make it difficult to install security and anti-malware programs. Recently, several studies have attempted to quickly search for vulnerable internet-connected devices to address this real issue. Open problems remain in these internet-wide scan technologies, such as filtering by security devices and a shortage of collected operating system (OS) information. This paper proposes an intelligent internet-wide scan model that improves IP state scanning with advanced internet protocol (IP) randomization, reactive protocol (port) scanning, and OS fingerprinting scanning, applying the k* algorithm in order to find vulnerable IoT devices. Additionally, we describe experimental results compared with existing internet-wide scan technologies such as ZMap and Shodan. The proposed model experimentally shows improved performance: its throughput per minute (TPM) is similar to ZMap without degrading IP scan throughput, single IP address generation is about 118% faster than ZMap, protocol scanning performs about 129% better than Censys-based ZMap, and OS fingerprinting exceeds ZMap with about 50% accuracy.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
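As a concrete illustration of scan-order randomization, the Python sketch below uses a full-period linear congruential generator to visit every 32-bit address exactly once in scrambled order, spreading probes across the address space instead of sweeping it sequentially. ZMap itself iterates over a cyclic multiplicative group; the LCG is a simplified stand-in.

    import ipaddress

    def randomized_ips(seed=1):
        # These LCG constants satisfy the Hull-Dobell conditions, so the
        # generator has full period 2**32: every address appears exactly once.
        a, c, m = 1664525, 1013904223, 2**32
        x = seed
        for _ in range(m):
            x = (a * x + c) % m
            yield ipaddress.IPv4Address(x)

    gen = randomized_ips()
    for _ in range(3):
        print(next(gen))  # scattered addresses, not 0.0.0.1, 0.0.0.2, ...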

10 pages, 7321 KiB  
Article
Enhancing Data Transfer Performance Utilizing a DTN between Cloud Service Providers
by Wontaek Hong, Jeonghoon Moon, Woojin Seok and Jinwook Chung
Symmetry 2018, 10(4), 110; https://doi.org/10.3390/sym10040110 - 16 Apr 2018
Cited by 3 | Viewed by 3797
Abstract
The rapid transfer of massive data in the cloud environment is required to prepare for unexpected situations such as disaster recovery. To meet this requirement, we propose a new approach to transferring cloud virtual machine images rapidly in the cloud environment utilizing dedicated Data Transfer Nodes (DTNs). The overall procedure is composed of local/remote copy processes and a DTN-to-DTN transfer process. These processes are coordinated and executed based on a fork system call in the proposed algorithm. In addition, we focus especially on the local copy process between a cloud controller and DTNs, and improve data transfer performance through well-tuned mount techniques in Network File System (NFS)-based connections. Several experiments were performed considering combinations of synchronous/asynchronous modes and network buffer sizes. We report and compare the throughput for all experimental cases. The best write throughput was obtained with the NFS server on a DTN and the NFS client on a cloud controller, running entirely in asynchronous mode.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
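The fork-based coordination can be pictured with a short Python sketch: the parent process forks a child to perform the local copy onto the NFS-mounted DTN staging area and reaps it before the next transfer leg. The paths are hypothetical placeholders, and the NFS mount tuning (e.g., the asynchronous mode studied in the paper) happens outside this code.

    import os, shutil

    SRC = "/var/lib/images/vm.qcow2"   # hypothetical VM image path
    DTN_MOUNT = "/mnt/dtn"             # hypothetical NFS mount of the DTN

    pid = os.fork()
    if pid == 0:
        # Child: local copy from the cloud controller to the DTN staging area.
        shutil.copy(SRC, DTN_MOUNT)
        os._exit(0)
    else:
        # Parent: could start the DTN-to-DTN leg here, then wait for the child.
        os.waitpid(pid, 0)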

13 pages, 8032 KiB  
Article
Applying Genetic Programming with Similar Bug Fix Information to Automatic Fault Repair
by Geunseok Yang, Youngjun Jeong, Kyeongsic Min, Jung-won Lee and Byungjeong Lee
Symmetry 2018, 10(4), 92; https://doi.org/10.3390/sym10040092 - 02 Apr 2018
Cited by 5 | Viewed by 4036
Abstract
Owing to the high complexity of recent software products, developers cannot avoid major/minor mistakes, and software bugs are generated during the software development process. When developers manually modify program source code using bug descriptions to fix bugs, their daily workloads and costs increase. Therefore, we need a way to reduce their workloads and costs. In this paper, we propose a novel automatic fault repair method that uses similar bug fix information based on genetic programming (GP). First, we search for buggy source code similar to the newly given buggy code, and then retrieve the fix that was applied to the most similar code. Next, we transform the fixed code into abstract syntax trees for applying GP and generate candidate program patches. In this step, we verify the candidate patches using a fitness function based on given test cases to determine whether each patch is valid. Finally, we produce program patches that fix the newly given buggy code.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
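The patch-validation step can be sketched as a simple test-based fitness function in Python: a candidate patch is scored by the fraction of given test cases the patched program passes, with a score of 1.0 marking a valid patch. The runner command below is a hypothetical placeholder for however the patched program is built and executed.

    import subprocess

    def fitness(patched_program, test_cases):
        # test_cases: list of (stdin_data, expected_stdout) pairs.
        passed = 0
        for stdin_data, expected in test_cases:
            out = subprocess.run(["python", patched_program], input=stdin_data,
                                 capture_output=True, text=True, timeout=10)
            passed += (out.stdout.strip() == expected)
        return passed / len(test_cases)  # 1.0 => candidate patch is valid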

15 pages, 44060 KiB  
Article
Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System
by Phuong Minh Chu, Seoungjae Cho, Sungdae Sim, Kiho Kwak and Kyungeun Cho
Symmetry 2018, 10(4), 83; https://doi.org/10.3390/sym10040083 - 28 Mar 2018
Cited by 5 | Viewed by 3627
Abstract
Nowadays, unmanned ground vehicles (UGVs) are widely used for many applications. UGVs carry sensors including multi-channel laser sensors, two-dimensional (2D) cameras, Global Positioning System receivers, and inertial measurement units (GPS–IMU). The multi-channel laser sensors and 2D cameras collect information about the environment surrounding the vehicle, while the GPS–IMU system determines the vehicle's position, acceleration, and velocity. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separate the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part is used to create a dynamic triangular mesh based on the height map and vehicle position. Modeling nonground parts in dynamic environments that include moving objects is more challenging. In the first step, we apply our object segmentation algorithm to divide nonground points into separate objects. Next, an object tracking algorithm detects dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, are separated into two groups: surface objects and non-surface objects. We employ colored particles to model the non-surface objects, and two dynamic projection panels to generate 3D meshes for the surface and large dynamic objects. In addition, we apply two processes to optimize the modeling result: we remove any trace of the moving objects, collect the points on the dynamic objects from previous frames, and merge these points with the nonground points in the current frame. We also apply sliding window and near point projection techniques to fill the holes in the meshes. Finally, we apply texture mapping using 2D images captured by three cameras installed in the front of the robot. The experimental results demonstrate that our nonground modeling method can model photorealistic, real-time 3D scenes around a remote-controlled robot.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
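A common form of the height-map-based ground/nonground split mentioned above can be sketched as follows: points are binned into a 2D grid and labeled ground when they lie close to their cell's minimum height. The cell size and threshold are illustrative assumptions, not the paper's calibrated values.

    import numpy as np

    def split_ground(points, cell=0.5, thresh=0.2):
        # points: (N, 3) array of x, y, z; returns a boolean ground mask.
        ij = np.floor(points[:, :2] / cell).astype(int)
        cells = {}
        for idx, key in enumerate(map(tuple, ij)):
            cells.setdefault(key, []).append(idx)
        ground = np.zeros(len(points), dtype=bool)
        for idxs in cells.values():
            z = points[idxs, 2]
            ground[idxs] = z < z.min() + thresh  # near the cell's lowest point
        return ground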

14 pages, 6841 KiB  
Article
A Robust Image Watermarking Technique Based on DWT, APDCBT, and SVD
by Xiao Zhou, Heng Zhang and Chengyou Wang
Symmetry 2018, 10(3), 77; https://doi.org/10.3390/sym10030077 - 19 Mar 2018
Cited by 77 | Viewed by 8511
Abstract
Copyright protection for digital multimedia has become a research hotspot in recent years, and digital watermarking has emerged as an efficient solution. In this article, a highly robust hybrid watermarking method is proposed. The discrete wavelet transform (DWT), the recently presented all phase discrete cosine biorthogonal transform (APDCBT), and the singular value decomposition (SVD) are adopted in this method to insert and recover the watermark. To enhance watermark imperceptibility, the direct current (DC) coefficients after block-based APDCBT in the high-frequency sub-bands (LH and HL) are modified using the watermark. Compared with a conventional SVD-based watermarking method and another watermarking technique, the watermarked images obtained by the proposed method have higher image quality. In addition, the proposed method achieves high robustness against various image processing attacks.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
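A simplified sketch of transform-domain embedding in the spirit of this method is given below: a one-level DWT isolates a detail sub-band and the watermark is added to its singular values. The APDCBT stage is omitted for brevity (no common Python library implements it), and the embedding strength alpha is an illustrative assumption.

    import numpy as np
    import pywt

    def embed(cover, watermark, alpha=0.05):
        # cover: 2D grayscale image; watermark: 1D array with len >= len(S).
        LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), "haar")
        U, S, Vt = np.linalg.svd(HL, full_matrices=False)
        S_marked = S + alpha * watermark[:len(S)]  # embed in singular values
        HL_marked = U @ np.diag(S_marked) @ Vt
        return pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")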

18 pages, 2699 KiB  
Article
Vision-Based Parking-Slot Detection: A Benchmark and A Learning-Based Approach
by Lin Zhang, Xiyuan Li, Junhao Huang, Ying Shen and Dongqing Wang
Symmetry 2018, 10(3), 64; https://doi.org/10.3390/sym10030064 - 13 Mar 2018
Cited by 21 | Viewed by 8104
Abstract
Recent years have witnessed a growing interest in developing automatic parking systems in the field of intelligent vehicles. However, how to effectively and efficiently locate parking-slots using a vision-based system is still an unresolved issue. Even more seriously, there is no publicly available labeled benchmark dataset for tuning and testing parking-slot detection algorithms. In this paper, we attempt to fill these research gaps to some extent, and our contributions are twofold. Firstly, to facilitate the study of vision-based parking-slot detection, a large-scale parking-slot image database is established. This database comprises 8600 surround-view images collected from typical indoor and outdoor parking sites; for each image, the marking-points and parking-slots are carefully labeled. Such a database can serve as a benchmark to design and validate parking-slot detection algorithms. Secondly, a learning-based parking-slot detection approach, namely PSDL, is proposed. Given a surround-view image, PSDL first detects the marking-points and then infers the valid parking-slots. The efficacy and efficiency of PSDL have been corroborated on our database, and it is expected to serve as a baseline as other researchers develop more sophisticated methods.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
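The second stage of such a pipeline, inferring slots from detected marking-points, can be sketched as pairing points whose separation matches a typical slot entrance width. The width range below is an illustrative assumption, not the calibrated values used in PSDL.

    import itertools, math

    def infer_slots(points, min_w=2.2, max_w=3.2):
        # points: (x, y) marking-point coordinates in meters.
        slots = []
        for p, q in itertools.combinations(points, 2):
            if min_w <= math.dist(p, q) <= max_w:
                slots.append((p, q))  # entrance line of a candidate slot
        return slots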

16 pages, 1388 KiB  
Article
A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments
by JongBeom Lim, Joon-Min Gil and HeonChang Yu
Symmetry 2018, 10(1), 30; https://doi.org/10.3390/sym10010030 - 17 Jan 2018
Cited by 3 | Viewed by 5517
Abstract
Many artificial intelligence applications require a huge amount of computing resources; as a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee service level agreements, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, snapshot protocols have been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized for artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm that runs across interconnected nodes in a scalable fashion and can handle artificial intelligence applications in which a large number of computing nodes are running. We show that our distributed snapshot protocol guarantees correctness, safety, and liveness.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
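For background, the marker rule of the classic Chandy-Lamport protocol, which distributed snapshot protocols build on, fits in a few lines of Python: on first receipt of a marker a node records its local state and forwards the marker on every outgoing channel. This is the textbook baseline, not the authors' optimized protocol.

    class Node:
        def __init__(self, name, out_channels):
            self.name, self.out = name, out_channels
            self.recorded = None

        def on_marker(self, send):
            if self.recorded is None:                    # first marker seen
                self.recorded = f"state({self.name})"    # record local state
                for ch in self.out:                      # then propagate it
                    send(ch, "MARKER")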

1911 KiB  
Article
Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing
by Yan Li and Byeong-Seok Shin
Symmetry 2017, 9(12), 311; https://doi.org/10.3390/sym9120311 - 10 Dec 2017
Cited by 4 | Viewed by 3734
Abstract
With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important concerns for participants in spatial crowdsourcing. In this paper, we propose an R-tree spatial cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree built from the requested crowdsourcing tasks to reduce the server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based spatially anonymous data instead of exact positions, this method preserves the location privacy of participants in a simple way. Our experiments show that the proposed method is faster than the existing method and remains very efficient as the scale increases.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
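The MBR-based cloaking idea can be sketched simply: instead of an exact position, a participant reports the minimum bounding rectangle of k jittered points around it. The jitter radius and k below are illustrative assumptions.

    import random

    def cloak(x, y, k=8, radius=0.01):
        # Generate k dummy positions near (x, y) and report only their MBR.
        pts = [(x + random.uniform(-radius, radius),
                y + random.uniform(-radius, radius)) for _ in range(k)]
        xs, ys = zip(*pts)
        return (min(xs), min(ys), max(xs), max(ys))  # sent to the server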

1484 KiB  
Article
Qinling: A Parametric Model in Speculative Multithreading
by Yuxiang Li, Yinliang Zhao and Bin Liu
Symmetry 2017, 9(9), 180; https://doi.org/10.3390/sym9090180 - 02 Sep 2017
Cited by 2 | Viewed by 4888
Abstract
Speculative multithreading (SpMT) is a thread-level automatic parallelization technique that can accelerate sequential programs, especially irregular applications that are hard to parallelize by conventional approaches. Thread partitioning plays a critical role in SpMT. Conventional machine learning-based thread partitioning approaches apply machine learning offline to guide partitioning, but cannot explicitly capture the relationship between partitioning and performance. In this paper, we build a parametric model (Qinling) with a multiple regression method to discover the inherent relationship between thread partitioning and performance. The paper first extracts the unpredictable parameters that determine thread partitioning performance in SpMT; second, we build the parametric model Qinling from the extracted parameters and speedups, train Qinling offline, and apply it to predict the theoretical speedups of unseen applications. Finally, we validate the model. Prophet, which consists of an automatic parallelization compiler and a multi-core simulator, is used to obtain real speedups of the input programs, and the Olden and SPEC2000 benchmarks are used to train and validate the parametric model. Experiments show that Qinling predicts the speedups of unseen programs well and provides feedback guidance for Prophet to obtain the optimal partition parameters.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
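The modeling step amounts to fitting a multiple regression from partition parameters to measured speedups and then predicting the speedup of an unseen program, as in the sketch below. The feature names and numbers are fabricated for illustration only; Qinling's actual parameter set and regression form are defined in the paper.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Assumed features per program: [spawn distance, thread size, dependences]
    X_train = np.array([[12, 40, 3], [8, 25, 5], [20, 60, 2], [15, 35, 4]])
    y_train = np.array([1.8, 1.3, 2.1, 1.6])  # measured speedups (illustrative)

    model = LinearRegression().fit(X_train, y_train)
    print(model.predict([[10, 30, 4]]))       # predicted speedup, unseen program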

2190 KiB  
Article
Blockchain Security in Cloud Computing: Use Cases, Challenges, and Solutions
by Jin Ho Park and Jong Hyuk Park
Symmetry 2017, 9(8), 164; https://doi.org/10.3390/sym9080164 - 18 Aug 2017
Cited by 227 | Viewed by 30678
Abstract
Blockchain has drawn attention as the next-generation financial technology owing to security properties suited to the information era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash values. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment, and its applications are expected to expand further. Cloud computing has been widely adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its current research trends, and we examine in detail how blockchain security can be adapted to cloud computing and what secure solutions result.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
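The hash chaining that underlies blockchain's tamper evidence takes only a few lines: each block stores the hash of its predecessor, so altering any block invalidates every later link. This is a generic sketch of the data structure, not a full consensus system.

    import hashlib, json

    def make_block(data, prev_hash):
        block = {"data": data, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode())
        block["hash"] = digest.hexdigest()
        return block

    genesis = make_block("genesis", "0" * 64)
    second = make_block("tx: A->B 5", genesis["hash"])
    assert second["prev"] == genesis["hash"]  # editing genesis breaks this link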

8535 KiB  
Article
Recognition of Traffic Sign Based on Bag-of-Words and Artificial Neural Network
by Kh Tohidul Islam, Ram Gopal Raj and Ghulam Mujtaba
Symmetry 2017, 9(8), 138; https://doi.org/10.3390/sym9080138 - 30 Jul 2017
Cited by 31 | Viewed by 9452
Abstract
A traffic sign recognition system is a support system that can notify and warn drivers, improving safety under current road traffic conditions. A robust artificial intelligence-based traffic sign recognition system can support the driver and significantly reduce driving risk and injury by recognizing and interpreting various traffic signs using vision-based information. This study aims to recognize well-maintained, un-maintained, standard, and non-standard traffic signs using Bag-of-Words and Artificial Neural Network techniques. This work employs a Bag-of-Words model on Speeded Up Robust Features (SURF) descriptors of road traffic signs, and a robust Artificial Neural Network classifier recognizes each traffic sign in its respective class. The proposed system was trained and tested to determine a suitable neural network architecture. The experimental results showed highly accurate classification of traffic signs, including images with complex backgrounds: the proposed detection and recognition system obtained 99.00% classification accuracy with a 1.00% false positive rate. For real-time implementation and deployment, this low false positive rate supports the reliability and stability of the proposed system.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
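The Bag-of-Words stage of such a pipeline can be sketched with standard libraries: local descriptors are clustered into a visual vocabulary and each image becomes a histogram of visual words for a neural network to classify. ORB stands in here for the SURF descriptors used in the paper, and dataset loading is left out.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    orb = cv2.ORB_create()

    def descriptors(img):
        _, des = orb.detectAndCompute(img, None)
        return des if des is not None else np.zeros((1, 32), np.uint8)

    def bow_histogram(img, vocab):
        # vocab: KMeans fitted on descriptors pooled from all training images.
        words = vocab.predict(descriptors(img).astype(float))
        return np.bincount(words, minlength=vocab.n_clusters)

    # Training outline: fit KMeans on pooled descriptors, convert each image
    # to a histogram, then train sklearn's MLPClassifier on the histograms.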

2462 KiB  
Article
Model to Implement Virtual Computing Labs via Cloud Computing Services
by Washington Luna Encalada and José Luis Castillo Sequera
Symmetry 2017, 9(7), 117; https://doi.org/10.3390/sym9070117 - 13 Jul 2017
Cited by 19 | Viewed by 7902
Abstract
In recent years, a significant number of new technological ideas have appeared in the literature discussing the future of education. For example, e-learning, cloud computing, social networking, virtual laboratories, virtual realities, virtual worlds, massive open online courses (MOOCs), and bring your own device (BYOD) are all new concepts of immersive and global education that have emerged in the educational literature. One of the greatest challenges for e-learning solutions is reproducing the benefits of an educational institution's physical laboratory. For a university without a computing lab, obtaining hands-on IT training with software, operating systems, networks, servers, storage, and cloud computing similar to what could be received in a campus computing lab requires a combination of technological tools. Such teaching tools must promote the transmission of knowledge, encourage interaction and collaboration, and ensure students obtain valuable hands-on experience; that, in turn, allows universities to focus more on teaching and research than on implementing and configuring complex physical systems. In this article, we present a model for implementing ecosystems that allow universities to teach practical Information Technology (IT) skills. The model utilizes what is called a "social cloud", which combines all cloud computing service layers, namely Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and additionally integrates the cloud learning aspects of a MOOC and several aspects of social networking and support. Social clouds have striking benefits, such as centrality, ease of use, scalability, and ubiquity, providing a superior learning environment compared to a simple physical lab. The proposed model allows students to develop all the educational pillars, such as learning to know, learning to be, learning to live together, and, primarily, learning to do, through hands-on IT training from MOOCs. An aspect of the model has been verified experimentally and statistically through a course on computer operating systems.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)

Review


35 pages, 4283 KiB  
Review
A Review of Image Processing Techniques Common in Human and Plant Disease Diagnosis
by Nikos Petrellis
Symmetry 2018, 10(7), 270; https://doi.org/10.3390/sym10070270 - 09 Jul 2018
Cited by 28 | Viewed by 8777
Abstract
Image processing has been extensively used in various (human, animal, plant) disease diagnosis approaches, assisting experts to select the right treatment. It has been applied both to images captured by visible-light cameras and to equipment that captures information in invisible wavelengths (magnetic/ultrasonic sensors, microscopes, etc.). In most of the referenced diagnosis applications, the image is enhanced by various filtering methods, segmentation follows to isolate the regions of interest, and classification of the input image is performed at the final stage. The diagnosis approaches based on these steps and their common methods are described. The features extracted from a plant/skin disease diagnosis framework developed by the author are used here to demonstrate various techniques adopted in the literature. The metrics, experimental conditions, and results presented in the referenced approaches are also discussed; the accuracy achieved by diagnosis methods based on image processing is often higher than 90%. The motivation for this review is to highlight the most common and efficient methods employed in various disease diagnosis approaches and to suggest how they can be used in similar or different applications.
(This article belongs to the Special Issue Advanced in Artificial Intelligence and Cloud Computing)
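The enhance-segment-classify pipeline that the review surveys can be illustrated with standard scikit-image steps: median filtering for enhancement, Otsu thresholding for segmentation, and simple region features for the downstream classifier. All parameter choices here are illustrative.

    import numpy as np
    from skimage import filters, measure

    def lesion_features(gray):
        # gray: 2D grayscale image of the inspected skin/leaf region.
        smooth = filters.median(gray)                    # enhancement
        mask = smooth > filters.threshold_otsu(smooth)   # segmentation
        regions = measure.regionprops(measure.label(mask))
        # Per-region shape features feed the final classification stage.
        return [(r.area, r.eccentricity) for r in regions]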
