Information Systems Modeling Based on Graph Theory

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 13779

Special Issue Editors


Prof. Dr. András Benczúr
Guest Editor
Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary
Interests: database; data and document management systems; big data; cloud; modeling of information systems; information theory

Prof. Dr. Domenico Ursino
Guest Editor
Department of Information Engineering (DII), Università Politecnica delle Marche (Università degli Studi di Ancona), Ancona, Italy
Interests: social network analysis; biomedical engineering; data lakes; innovation management; Internet of Things

Prof. Dr. Bálint Molnár
Guest Editor
Information Systems Department, Faculty of Informatics, Eötvös Loránd University, Budapest, Hungary
Interests: modeling of information systems using formal methods; methods for analyzing and designing information systems; expert systems; application of data science in the field of enterprise information systems

Special Issue Information

Dear Colleagues, 

We are pleased to announce this Special Issue of the journal Mathematics, entitled "Information Systems Modeling Based on Graph Theory." It focuses on the application of graphs and graph theory to any aspect of information systems, including information system design and modeling in organizations, as well as large systems of information collection in science, medicine, administration, etc.

We aim to include articles containing novel applications of theoretical results on different graph models, including ordinary graphs, hypergraphs, and any kind of labeled or colored digraphs.

Another trend in today's technology is the Internet of Things (IoT), which generates and supplies enormous amounts of data to information systems built on recent technologies such as data warehouses and data lakes.

IoT networks, together with fog and edge computing, can be considered systems in which computing devices are interconnected and exchange data over a communication network. The enormous amounts of data they produce are collected and processed in information systems. The effective design of this complex architecture requires rigorous, mathematically based models, especially graph-based approaches, for checking the consistency, integrity, and security of models.
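
To make this concrete, the short Python sketch below (our illustration, not a prescribed formalism) models a toy sensor–edge–fog–data-lake topology as a directed graph and checks one simple consistency property, namely that every sensor can deliver its data to some storage node; the node names and roles are illustrative assumptions.

    # A minimal sketch: an IoT/fog/edge topology as a directed graph, with a basic
    # consistency check.  Node names and the "role" attribute are illustrative assumptions.
    import networkx as nx

    topology = nx.DiGraph()
    topology.add_nodes_from(["s1", "s2"], role="sensor")
    topology.add_nodes_from(["edge1", "fog1"], role="compute")
    topology.add_nodes_from(["lake"], role="storage")
    topology.add_edges_from([("s1", "edge1"), ("s2", "edge1"),
                             ("edge1", "fog1"), ("fog1", "lake")])

    storage = {n for n, r in topology.nodes(data="role") if r == "storage"}
    sensors = [n for n, r in topology.nodes(data="role") if r == "sensor"]
    for sensor in sensors:
        reachable = nx.descendants(topology, sensor)       # all nodes reachable from the sensor
        assert reachable & storage, f"{sensor} cannot deliver data to any storage node"
    print("consistent: every sensor reaches a data lake node")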

Enterprise information systems support and control operational business processes, embracing everything from simple internal back-office processes to complex inter-organizational processes. Technologies such as business process modeling and management, workflow management (WFM), enterprise application integration (EAI), enterprise resource planning (ERP), service-oriented architectures (SOA), and web services (WS) have a sound mathematical background that underlies the semi-formal, visual languages used to represent business processes. Model-to-Program (M-2-P) methods exploit the fact that these descriptive languages are grounded in mathematics, especially in various graph-based approaches. The algorithms that transform representations of business processes into web services and executable programs rely on formal, graph-theoretic approaches to create reliable operational systems.
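
As a small illustration of such graph-based checks (a sketch only, not the M-2-P transformation itself), the following Python fragment encodes a toy business process as a directed graph and verifies a basic structural soundness property: every activity lies on a path from the start node to the end node. The activity names are assumptions made for the example.

    # A minimal sketch of a structural soundness check on a workflow graph; the process
    # and its activity names are illustrative assumptions, not a method from the call text.
    import networkx as nx

    workflow = nx.DiGraph()
    workflow.add_edges_from([
        ("start", "receive_order"), ("receive_order", "check_credit"),
        ("check_credit", "ship_goods"), ("check_credit", "reject_order"),
        ("ship_goods", "end"), ("reject_order", "end"),
    ])

    reachable = nx.descendants(workflow, "start") | {"start"}    # reachable from start
    coreachable = nx.ancestors(workflow, "end") | {"end"}        # can still reach end
    dead = set(workflow) - (reachable & coreachable)
    print("structurally sound" if not dead else f"activities off every start-end path: {dead}")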

Process mining aims to extract information from event logs to capture business processes as they are actually executed. To interpret the data stored in logs effectively and efficiently, and to exploit recent methods of data science, one reasonable direction is to apply adequate graph representations and pattern-matching methodologies.
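
For instance, the minimal sketch below (illustrative only; it names no particular process-mining algorithm) builds a weighted directly-follows graph from a toy event log; the log format and activity names are assumptions.

    # A minimal process-mining sketch: build a directly-follows graph from an event log.
    # The log format (case id -> ordered activity list) is an illustrative assumption.
    from collections import Counter

    import networkx as nx

    event_log = {
        "case1": ["register", "review", "approve", "archive"],
        "case2": ["register", "review", "reject"],
        "case3": ["register", "review", "approve", "archive"],
    }

    edge_counts = Counter(
        (a, b) for trace in event_log.values() for a, b in zip(trace, trace[1:])
    )
    dfg = nx.DiGraph()
    for (a, b), weight in edge_counts.items():
        dfg.add_edge(a, b, weight=weight)          # edge weight = directly-follows frequency

    print(sorted(dfg.edges(data="weight")))        # e.g. ('approve', 'archive', 2)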

This Special Issue welcomes theoretical and applied contributions that address graph-theoretic algorithms, technologies, and practices. It is devoted to recent advances in information systems and the application of graphs in the realm of information and data architecture and graph data models, including but not limited to the following areas:

  • Representations of business and data processing
  • Aspects of models in information systems
  • The architecture of information systems
  • Analyses of models represented by finite-state machines and automata
  • Knowledge representation and integration
  • Model transformation
  • Consistency checking of IS models (data and information architecture)
  • Graph databases in IS
  • Querying large models by graph algorithms
  • Applications of ranking graphs
  • Graph representation and logic languages
  • Applications of data science on graph representation of information systems
  • Analytics of real, large graphs and networks
  • Graph pattern search and matching
  • Semantic unification and harmonization
  • Social network analysis and its applications
  • Social Internet of Things (SIoT) and its applications
  • Multiple Internet of Things (MIoT) and its applications
  • Horn logic and hypergraphs

In response to these recent advancements, the objective of this Special Issue is to present a collection of notable methods and applications of graph-theoretical approaches in the modeling of information systems. We invite scientists from all around the world to contribute to a comprehensive collection of papers on progressive, high-impact modeling of information systems that exploits the properties of various graph-based approaches.

We thus encourage you to submit high-quality articles disseminating novel research achievements to this Special Issue.

Prof. Dr. András Benczúr
Prof. Dr. Domenico Ursino
Prof. Dr. Bálint Molnár
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information systems
  • business data processing
  • finite state machines
  • knowledge representation
  • model transformation
  • graph databases
  • ranking graphs
  • logic languages
  • large networks
  • graph pattern matching
  • social networks
  • IoT
  • hypergraphs
  • graph theory
  • graph algorithms
  • meta-graphs

Published Papers (7 papers)


Research

30 pages, 3020 KiB  
Article
Algorithm for the Accelerated Calculation of Conceptual Distances in Large Knowledge Graphs
by Rolando Quintero, Esteban Mendiola, Giovanni Guzmán, Miguel Torres-Ruiz and Carlos Guzmán Sánchez-Mejorada
Mathematics 2023, 11(23), 4806; https://doi.org/10.3390/math11234806 - 28 Nov 2023
Viewed by 647
Abstract
Conceptual distance refers to the degree of proximity between two concepts within a conceptualization. It is closely related to semantic similarity and relationships, but its measurement strongly depends on the context of the given concepts. DIS-C represents an advancement in the computation of semantic similarity/relationships that is independent of the type of knowledge structure and semantic relations when generating a graph from a knowledge base (ontologies, semantic networks, and hierarchies, among others). This approach determines the semantic similarity between two indirectly connected concepts in an ontology by propagating local distances by applying an algorithm based on the All Pairs Shortest Path (APSP) problem. This process is implemented for each pair of concepts to establish the most effective and efficient paths to connect these concepts. The algorithm identifies the shortest path between concepts, which allows for an inference of the most relevant relationships between them. However, one of the critical issues with this process is computational complexity, combined with the design of APSP algorithms, such as Dijkstra, which is O(n³). This paper studies different alternatives to improve the DIS-C approach by adapting approximation algorithms, focusing on Dijkstra, pruned Dijkstra, and sketch-based methods, to compute the conceptual distance according to the need to scale DIS-C to analyze very large graphs; therefore, reducing the related computational complexity is critical. Tests were performed using different datasets to calculate the conceptual distance when using the original version of DIS-C and when using the influence area of nodes. In situations where time optimization is necessary for generating results, using the original DIS-C model is not the optimal method. Therefore, we propose a simplified version of DIS-C to calculate conceptual distances based on centrality estimation. The obtained results for the simple version of DIS-C indicated that the processing time decreased 2.381 times when compared to the original DIS-C version. Additionally, for both versions of DIS-C (normal and simple), the APSP algorithm decreased the computational cost when using a two-hop coverage-based approach.
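
For readers unfamiliar with the APSP step that the paper accelerates, the following minimal NetworkX sketch shows plain repeated Dijkstra over a toy concept graph; it is not the DIS-C algorithm or its pruned/sketch-based variants, and the graph and weights are assumptions.

    # A minimal sketch of the all-pairs shortest path (APSP) step discussed above;
    # plain repeated Dijkstra, not DIS-C.  The toy concept graph and weights are assumptions.
    import networkx as nx

    concept_graph = nx.DiGraph()
    concept_graph.add_weighted_edges_from([
        ("animal", "mammal", 1.0), ("mammal", "dog", 0.5),
        ("mammal", "cat", 0.5), ("animal", "bird", 1.2),
    ])

    # One Dijkstra run per source node: roughly O(n * (m + n log n)),
    # which degenerates toward O(n^3) on dense graphs.
    distances = dict(nx.all_pairs_dijkstra_path_length(concept_graph, weight="weight"))
    print(distances["animal"]["dog"])   # 1.5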

14 pages, 9078 KiB  
Article
Attributed Graph Embedding Based on Attention with Cluster
by Bin Wang, Yu Chen, Jinfang Sheng and Zhengkun He
Mathematics 2022, 10(23), 4563; https://doi.org/10.3390/math10234563 - 01 Dec 2022
Cited by 2 | Viewed by 1521
Abstract
Graph embedding is of great significance for the research and analysis of graphs. It aims to map the nodes of a network to low-dimensional vectors while preserving the information about those nodes contained in the original graph. In recent years, the appearance of graph neural networks (GNNs) has significantly improved the accuracy of graph embedding. However, existing GNN-based methods do not consider the influence of clusters, so this paper proposes a new method that incorporates the influence of clusters into the generation of graph embeddings. We use an attention mechanism to pass messages from the cluster-pooled result and integrate the whole process into a graph autoencoder as the third layer of the encoder. The experimental results show that our model achieves substantial improvements over the baseline methods in node clustering and link prediction tasks, demonstrating that the embeddings generated by our model have excellent expressiveness.
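
As background, the short numpy sketch below illustrates attention-weighted neighbour aggregation, the generic building block that the paper extends with cluster pooling; it is not the authors' model, and the feature matrix, adjacency, and scoring function are illustrative assumptions.

    # A minimal numpy sketch of attention-weighted neighbour aggregation (not the paper's model).
    # Features, adjacency and the dot-product scoring are illustrative assumptions.
    import numpy as np

    X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])       # node features
    A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], float)   # adjacency with self-loops

    scores = X @ X.T                                          # pairwise attention scores
    scores = np.where(A > 0, scores, -np.inf)                 # mask non-neighbours
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)                 # row-wise softmax
    embedding = alpha @ X                                     # attention-weighted average
    print(embedding.round(3))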

47 pages, 8259 KiB  
Article
The Application of Directed Hyper-Graphs for Analysis of Models of Information Systems
by Bálint Molnár and András Benczúr
Mathematics 2022, 10(5), 759; https://doi.org/10.3390/math10050759 - 27 Feb 2022
Cited by 4 | Viewed by 2272
Abstract
Hyper-graphs offer the opportunity to formulate logical statements about their components, for example, using Horn clauses. Several models of Information Systems, such as workflows, i.e., business processes, can be represented using hyper-graphs. During the modeling of Information Systems, many constraints should be maintained throughout the development process. The models of Information Systems are complex objects; for this reason, the analysis of algorithms and graph structures that can support the consistency and integrity of models is an essential issue. A set of interdependencies between models and components of the architecture can be formulated by functional dependencies and can be investigated via algorithmic methods. Information Systems can be perceived as overarching documents that include data collections; documents to be processed; and representations of business processes, activities, and services. When selecting and working out an appropriate method for encoding the artifacts of Information Systems, the complex structure can be represented using hyper-graphs. This representation enables the application of various model-checking, verification, and validation tools that are based on formal approaches. This paper describes the proposed hyper-graph representations in different situations, along with the formal, algorithm-based model-checking methods that are coupled with these representations. The model-checking methods are realized by algorithms that are grounded in graph-theoretical approaches and tailored to the specificity of hyper-graphs. Finally, possible applications in a real-life enterprise environment are outlined.
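
As an aside for readers, the sketch below shows Horn-clause-style forward chaining over a directed hypergraph, the kind of reasoning the abstract couples with IS models; it is not the paper's model-checking procedure, and the rules are illustrative assumptions.

    # A minimal sketch of forward chaining over a directed hypergraph (not the paper's method).
    # Each hyperedge is (tail set, head): "if all tail facts hold, the head holds".
    # The rules and fact names below are illustrative assumptions.
    hyperedges = [
        ({"order_form", "customer_record"}, "validated_order"),
        ({"validated_order", "stock_available"}, "shipment"),
        ({"validated_order"}, "invoice"),
    ]

    def closure(facts, edges):
        """Repeatedly fire hyperedges whose whole tail is already derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for tail, head in edges:
                if head not in derived and tail <= derived:
                    derived.add(head)
                    changed = True
        return derived

    print(closure({"order_form", "customer_record", "stock_available"}, hyperedges))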

13 pages, 2593 KiB  
Article
Network Embedding Algorithm Taking in Variational Graph AutoEncoder
by Dongming Chen, Mingshuo Nie, Hupo Zhang, Zhen Wang and Dongqi Wang
Mathematics 2022, 10(3), 485; https://doi.org/10.3390/math10030485 - 02 Feb 2022
Cited by 2 | Viewed by 1715
Abstract
Complex networks with node attribute information are employed to represent complex relationships between objects. Research on attributed network embedding fuses the topology and the node attribute information of an attributed network in a common latent representation space, encoding the high-dimensional, sparse network information into a low-dimensional, dense vector representation and effectively improving the performance of network analysis tasks. Current research on attributed network embedding faces the problems of the high-dimensional sparsity of the attribute eigenmatrix and the underutilization of attribute information. In this paper, we propose a network embedding algorithm taking in a variational graph autoencoder (NEAT-VGA). The algorithm first pre-processes the attribute features, i.e., performs attribute feature learning for the network nodes. Then, the learned feature matrix and the adjacency matrix of the network are fed into the variational graph autoencoder to obtain the Gaussian distribution of the latent vectors, from which high-quality node embedding vectors can more easily be generated. The node embeddings obtained by sampling this Gaussian distribution are then used for reconstruction with structural and attribute losses. The loss function is minimized by iterative training until a low-dimensional vector representation containing both the network structure information and the attribute information of the nodes is obtained, and the performance of the algorithm is evaluated through link prediction experiments.
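
For orientation, the numpy sketch below shows two generic VGAE ingredients the abstract relies on, the reparameterisation trick and the inner-product decoder; it is not the NEAT-VGA pipeline, and the latent means and log-variances are illustrative assumptions.

    # A minimal numpy sketch of reparameterisation + inner-product decoding (not NEAT-VGA).
    # The latent means/log-stds below stand in for an encoder's output; they are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([[0.2, 0.8], [0.1, 0.9], [0.9, 0.1]])       # per-node latent means
    log_sigma = np.full_like(mu, -1.0)                        # per-node latent log stds

    eps = rng.standard_normal(mu.shape)
    Z = mu + np.exp(log_sigma) * eps                          # z = mu + sigma * eps

    A_hat = 1.0 / (1.0 + np.exp(-(Z @ Z.T)))                  # sigmoid(Z Z^T)
    print(A_hat.round(2))                                     # reconstructed edge probabilities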

22 pages, 509 KiB  
Article
Conceptual Coverage Driven by Essential Concepts: A Formal Concept Analysis Approach
by Amira Mouakher, Axel Ragobert, Sébastien Gerin and Andrea Ko
Mathematics 2021, 9(21), 2694; https://doi.org/10.3390/math9212694 - 23 Oct 2021
Cited by 3 | Viewed by 1559
Abstract
Formal concept analysis (FCA) is a mathematical theory that is typically used as a knowledge representation method. The approach starts with an input binary relation specifying a set of objects and attributes, finds the natural groupings (formal concepts) described in the data, and then organizes the concepts in a partial order structure or concept (Galois) lattice. Unfortunately, the total number of concepts in this structure tends to grow exponentially as the size of the data increases. Therefore, there are numerous approaches for selecting a subset of concepts to provide full or partial coverage. In this paper, we rely on the battery of mathematical models offered by FCA to introduce a new greedy algorithm, called Concise, to compute minimal and meaningful subsets of concepts. Thanks to its theoretical properties, the Concise algorithm is shown to avoid the sluggishness of its competitors while offering the ability to mine both partial and full conceptual coverage of formal contexts. Furthermore, experiments on massive datasets also underscore the preservation of the quality of the mined formal concepts through interestingness measures agreed upon by the community.
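
For readers new to FCA, the sketch below shows the two derivation operators and a brute-force enumeration of formal concepts on a tiny context; it only illustrates the lattice that Concise prunes, not the Concise algorithm itself, and the context is an illustrative assumption.

    # A minimal FCA sketch: derivation operators and brute-force concept enumeration.
    # The toy context (objects -> attributes) is an illustrative assumption.
    from itertools import combinations

    context = {
        "sparrow": {"flies", "has_feathers"},
        "penguin": {"swims", "has_feathers"},
        "trout":   {"swims", "has_scales"},
    }
    attributes = set().union(*context.values())

    def extent(attrs):     # objects having all given attributes
        return {o for o, a in context.items() if attrs <= a}

    def intent(objs):      # attributes shared by all given objects
        return attributes.intersection(*(context[o] for o in objs)) if objs else set(attributes)

    concepts = set()
    for r in range(len(attributes) + 1):
        for attrs in combinations(sorted(attributes), r):
            objs = extent(set(attrs))
            concepts.add((frozenset(objs), frozenset(intent(objs))))   # closed (extent, intent) pair
    print(len(concepts), "formal concepts")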

21 pages, 1131 KiB  
Article
Efficient Processing of All Nearest Neighbor Queries in Dynamic Road Networks
by Aavash Bhandari, Aziz Hasanov, Muhammad Attique, Hyung-Ju Cho and Tae-Sun Chung
Mathematics 2021, 9(10), 1137; https://doi.org/10.3390/math9101137 - 17 May 2021
Cited by 2 | Viewed by 1865
Abstract
The increasing prevalence of GPS-enabled smartphones has led to the tremendous usage of location-based service applications. In the past few years, a significant number of studies have been conducted on processing all nearest neighbor (ANN) queries. An ANN query on a road network extracts and returns the closest data object for every query object. Most of the existing studies on ANN queries are performed either in Euclidean space or on static road networks. Moreover, combining the nearest neighbor query and the join operation is an expensive procedure because it requires computing the distance between each pair of query objects and data objects. This study considers the problem of processing ANN queries on a dynamic road network, where the weight, i.e., the traveling distance and time, varies due to various traffic conditions. To address this problem, a shared-execution-based approach called standard clustered loop (SCL) is proposed that allows efficient processing of ANN queries on a dynamic road network. The key concept behind the shared-execution technique is to exploit the coherence property of road networks by clustering objects that share common paths and processing each cluster as a single path. In an empirical study, the SCL method achieves significantly better performance than competitive methods and efficiently reduces the computational cost of processing ANN queries in various problem settings.
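
As a primer on one ANN ingredient, the sketch below runs a single multi-source Dijkstra from all data objects so that each query node can read off the distance to its nearest data object; it is not the SCL shared-execution method, and the toy road network, weights, and object placement are assumptions.

    # A minimal sketch of nearest-data-object distances on a road network (not SCL).
    # The toy network, edge weights and object locations are illustrative assumptions.
    import networkx as nx

    road = nx.Graph()
    road.add_weighted_edges_from([
        ("a", "b", 2.0), ("b", "c", 1.5), ("c", "d", 3.0), ("b", "d", 4.0),
    ])
    data_objects = {"a", "d"}          # e.g. fuel stations
    query_objects = ["b", "c"]         # e.g. vehicles

    # One multi-source Dijkstra gives every node its distance to the nearest data object.
    dist_to_nearest = nx.multi_source_dijkstra_path_length(road, data_objects, weight="weight")
    for q in query_objects:
        print(q, "-> nearest data object at distance", dist_to_nearest[q])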

17 pages, 3994 KiB  
Article
Frequent Itemset Mining and Multi-Layer Network-Based Analysis of RDF Databases
by Gergely Honti and János Abonyi
Mathematics 2021, 9(4), 450; https://doi.org/10.3390/math9040450 - 23 Feb 2021
Cited by 4 | Viewed by 1914
Abstract
Triplestores or resource description framework (RDF) stores are purpose-built databases used to organise, store and share data with context. Knowledge extraction from a large amount of interconnected data requires effective tools and methods to address the complexity and the underlying structure of semantic information. We propose a method that generates an interpretable multilayered network from an RDF database. The method utilises frequent itemset mining (FIM) of the subjects, predicates and objects of the RDF data and automatically extracts informative subsets of the database for the analysis. The results are used to form layers in an analysable multidimensional network. The methodology enables consistent, transparent, multi-aspect-oriented knowledge extraction from the linked dataset. To demonstrate the usability and effectiveness of the methodology, we analyse how the sciences of sustainability and climate change are structured using the Microsoft Academic Knowledge Graph. In the case study, FIM forms networks of disciplines to reveal the significant interdisciplinary science communities in sustainability and climate change. The constructed multilayer network then enables an analysis of the significant disciplines and interdisciplinary scientific areas. To demonstrate the proposed knowledge extraction process, we search for interdisciplinary science communities and then measure and rank their multidisciplinary effects. The analysis identifies discipline similarities, pinpointing the similarity between atmospheric science and meteorology, as well as between geomorphology and oceanography. The results confirm that frequent itemset mining provides informative sampled subsets of RDF databases, which can be simultaneously analysed as layers of a multilayer network.
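
To illustrate the frequent-itemset idea on RDF data (only the generic FIM step, not the paper's layer-construction method), the sketch below treats each subject's set of predicates as a transaction and counts frequent predicate pairs; the triples and the support threshold are illustrative assumptions.

    # A minimal frequent-itemset sketch over RDF triples (not the paper's pipeline).
    # The triples and the minimum support are illustrative assumptions.
    from collections import Counter
    from itertools import combinations

    triples = [
        ("paper1", "hasTopic", "climate"), ("paper1", "citedBy", "paper3"),
        ("paper2", "hasTopic", "ocean"),   ("paper2", "citedBy", "paper3"),
        ("paper3", "hasTopic", "climate"),
    ]

    transactions = {}                      # subject -> set of predicates used with it
    for s, p, _ in triples:
        transactions.setdefault(s, set()).add(p)

    min_support = 2
    pair_counts = Counter(
        pair for items in transactions.values() for pair in combinations(sorted(items), 2)
    )
    frequent_pairs = {pair: c for pair, c in pair_counts.items() if c >= min_support}
    print(frequent_pairs)                  # e.g. {('citedBy', 'hasTopic'): 2}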

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Application of Directed Hypergraphs and Horn Function for Analysis of Models of Information Systems
Authors: Kristóf Bérczi
Affiliation: MTA-ELTE Egerváry Research Group, Department of Operations Research, Eötvös Loránd University
