New Advances in Securing Data and Big Data

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Databases and Data Structures".

Deadline for manuscript submissions: closed (31 January 2022) | Viewed by 2665

Special Issue Editors

College of Information and Communication Technology (ICT), Centre for Intelligent Systems, School of Engineering and Technology, Central Queensland University, Sydney, NSW 2000, Australia
Interests: virtual reality in education; big data and artificial intelligence; deep learning; expert systems; business intelligence; real-time analytics; legacy system modernization; Internet of Things (IoT)
Department of Computing Sciences & Cyber Security Institute, University of Houston, Houston, TX 77058, USA
Interests: big data mining and analytics; text mining; internet of things; cyber-physical systems; edge computing; computer networks; security and privacy

German Centre for Higher Education Research and Science Studies, Lange Laube 12, 30159 Hannover, Germany
Interests: information systems and databases; big data management; business intelligence; data quality management; artificial intelligence; data science; internet of things

Special Issue Information

Dear Colleagues,

Storing, analysing, and accessing data is a growing challenge for organisations. Competitive pressures and new regulations require organisations to handle increasing volumes and varieties of data efficiently, but this does not come cheap. Data sets grow rapidly, in part because they are increasingly gathered by cheap and numerous information-sensing Internet of Things devices such as mobile devices, aerial (remote-sensing) platforms, software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. Data are analysed in order to derive business-relevant findings and support business decisions. For the derived results to be valid and valuable, it is essential to ensure the quality of the underlying data. Tools, techniques, methods and algorithms can help assess data quality for different applications. Data quality, integrity and security are of prime importance in deriving any results. Information systems, artificial intelligence (AI), visualization, and statistics have a wide range of applications in data usage. For example, many organisations face the challenge of adapting processes, IT systems and data structures to integrate artificial intelligence technologies. AI techniques such as machine learning, deep learning, and knowledge graphs enable the fast and format-free connection of different data sources simply by storing translation tables. This allows organisations to start with a small set of data sources, set up AI and data systematically and separately from one another, and develop and connect additional data sources in quick sprints. This enables quick success with existing data and systems, minimizing risks and demonstrating the benefits of AI after short development cycles.
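The translation-table idea can be sketched minimally: each source system keeps its own field names, and a small mapping translates them into a shared vocabulary so new sources can be connected without reformatting existing data. The source names (`crm`, `billing`) and field mappings below are invented for illustration, not drawn from any specific system:

```python
# Hypothetical translation tables mapping per-source field names
# onto one common vocabulary (all names are illustrative).
TRANSLATION = {
    "crm":     {"cust_id": "customer_id", "nm": "name"},
    "billing": {"client_no": "customer_id", "client_name": "name"},
}

def to_common(source, record):
    """Rename a record's keys using the source's translation table."""
    table = TRANSLATION[source]
    return {table.get(k, k): v for k, v in record.items()}

# Records from two differently formatted sources land in one schema.
merged = [
    to_common("crm", {"cust_id": 7, "nm": "Ada"}),
    to_common("billing", {"client_no": 8, "client_name": "Bob"}),
]
```

Adding a new source then only requires storing one more translation table, leaving the existing data and AI components untouched.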

We invite you to submit high-quality contributions to this special issue on “New Advances in Securing Data and Big Data”, the topics of which cover the entire range from theory to application:

  • Data Quality 
  • Big Data 
  • Artificial Intelligence Algorithms 
  • Data Mining and Text Mining 
  • Scientific Knowledge Graphs 
  • Securing Big Data 
  • Big Data Processing and Analytics 
  • Modeling Data and Process Quality 
  • Machine Learning for Data Analysis

Dr. Meena Jha
Dr. Kewei Sha
Dr. Otmane Azeroual
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Data Quality 
  • Big Data 
  • Artificial Intelligence Algorithms 
  • Data Mining and Text Mining 
  • Scientific Knowledge Graphs 
  • Securing Big Data 
  • Big Data Processing and Analytics 
  • Modeling Data and Process Quality 
  • Machine Learning for Data Analysis

Published Papers (1 paper)

Research

25 pages, 1821 KiB  
Article
Efficient and Portable Distribution Modeling for Large-Scale Scientific Data Processing with Data-Parallel Primitives
by Hao-Yi Yang, Zhi-Rong Lin and Ko-Chih Wang
Algorithms 2021, 14(10), 285; https://doi.org/10.3390/a14100285 - 29 Sep 2021
Cited by 1 | Viewed by 1798
Abstract
The use of distribution-based data representation to handle large-scale scientific datasets is a promising approach. Distribution-based approaches often transform a scientific dataset into many distributions, each of which is calculated from a small number of samples. Most of the proposed parallel algorithms focus on modeling single distributions from many input samples efficiently, but these may not fit the large-scale scientific data processing scenario because they cannot utilize computing resources effectively. Histograms and the Gaussian Mixture Model (GMM) are the most popular distribution representations used to model scientific datasets. Therefore, we propose the use of multi-set histogram and GMM modeling algorithms for the scenario of large-scale scientific data processing. Our algorithms are developed by data-parallel primitives to achieve portability across different hardware architectures. We evaluate the performance of the proposed algorithms in detail and demonstrate use cases for scientific data processing. Full article
(This article belongs to the Special Issue New Advances in Securing Data and Big Data)
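As a rough illustration of the distribution-based representation the abstract describes (not the authors' actual algorithm, which is built on data-parallel primitives for cross-architecture portability), a large dataset can be split into many small blocks, with each block summarized by a compact distribution such as a histogram. The block size and bin count below are arbitrary choices for the sketch:

```python
# Illustrative sketch of distribution-based data representation:
# split a dataset into blocks and model each block as a small
# equal-width histogram computed from its few samples.

def block_histogram(samples, bins, lo, hi):
    """Count one block's samples into `bins` equal-width buckets over [lo, hi]."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        i = min(int((x - lo) / width), bins - 1)  # clamp top edge into last bin
        counts[i] += 1
    return counts

def model_dataset(data, block_size, bins):
    """Summarize the dataset as one histogram per block (multi-set modeling)."""
    lo, hi = min(data), max(data)
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [block_histogram(b, bins, lo, hi) for b in blocks]

data = [0.1, 0.2, 0.9, 0.8, 0.4, 0.5, 0.45, 0.55]
hists = model_dataset(data, block_size=4, bins=2)
```

In the paper's setting each block's model (a histogram or a Gaussian Mixture Model) replaces the raw samples, and the per-block modeling step is what the proposed algorithms parallelize.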
