Signal and Data Analysis
A section of Entropy (ISSN 1099-4300).
In 1972, John W. Tukey stated, “Data analysis has to analyze real data. Most real data call for data investigation, while almost all statistical theory is concerned with data processing. This can be borne, in part because large segments of data investigation are, by themselves, data processing”. Every domain today generates huge amounts of real data, from medicine to aeronautics and from education to agriculture. There is no sector or activity in which data are not being generated.
Data analysis refers to the entire process of converting raw data into knowledge. CRISP-DM is the de facto standard for modelling this process, guiding the transition from business understanding through data preparation to modelling, where algorithms from artificial intelligence (AI), statistics, or other fields can be integrated to search for patterns.
Special attention must be paid to the data understanding and data preparation phases of the process. Data are rarely collected for the purpose of analysis; rather, they are by-products of other operational processes and arrive in a variety of formats and types.
Most of the information generated today is collected in unstructured formats, such as text, images, audio, and video. All these data and signals require special preparation prior to the application of modelling techniques.
This Section focuses on original research results in the broad and fascinating field of data analysis. Manuscripts are solicited on data cleansing, data wrangling, data modelling, signal processing, text processing, data mining, and their applications, whether to traditional sectors such as marketing and finance or to novel sectors such as health, manufacturing, agriculture, and space. Submissions providing critical, up-to-date reviews are also welcome.
Special emphasis is placed on methods for the analysis of complex data grounded in probability theory, information theory, or entropy. Of particular interest are:
- statistical methods (basic statistics, PCA, correlation, regression, etc.);
- network analysis (entropy, Granger causality, etc.);
- visualization;
- traditional AI methods (decision trees, random forests, support vector machines (SVM), boosting, bagging, etc.);
- stream analysis methods;
- database approaches;
- clustering approaches (hierarchical, distance-based, etc.);
- knowledge graph analysis and ontologies to enrich semantics;
- deep learning;
and the exploration of the relationship between these techniques and probability theory, information theory, and entropy.
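As a minimal illustration of the entropy-based perspective emphasized above, the following sketch computes the Shannon entropy of the empirical distribution of a sample, one of the simplest information-theoretic summaries of a dataset (the function name and example data are illustrative, not from the source):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform distribution over 4 symbols attains the maximum log2(4) = 2 bits;
# a constant sequence carries 0 bits.
print(shannon_entropy(list("ABCD")))  # → 2.0
```

In practice such plug-in estimates are biased for small samples, which is one reason entropy estimation itself remains an active research topic within this field.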
The following special issues within this section are currently open for submission:
- Applications of Topological Data Analysis in the Life Sciences (Deadline: 30 November 2021)
- Data Analytics in Sports Sciences: Changing the Game (Deadline: 30 November 2021)
- Challenges of Health Data Analytics (Deadline: 30 November 2021)
- Spatial–Temporal Data Analysis and Its Applications (Deadline: 30 November 2021)
- Information Theory in Emerging Biomedical Applications (Deadline: 15 December 2021)
- Physics-Based Machine and Deep Learning for PDE Models (Deadline: 22 December 2021)
- Information-Theoretic Foundations of Data Processing (Deadline: 31 December 2021)
- Methods in Artificial Intelligence and Information Processing (Deadline: 20 January 2022)
- Deep Learning Application on Visual Identity, Analysis, Diagnosis and Decision-Making (Deadline: 20 February 2022)
- Time-Frequency Analysis, AM-FM Models, and Mode Decompositions (Deadline: 28 February 2022)
- New and Improved Techniques of Information Theory for Quantum Chromodynamical Based Data (Deadline: 11 March 2022)
- Information Theory Based Methods in Machine Learning and Bioinformatics (Deadline: 25 March 2022)
- Wavelet Analysis for Data Science (Deadline: 31 March 2022)