Digital doi: 10.3390/digital4010014
Authors: Athanasios Evagelou, Alexandros Kleftodimos, Georgios Lappas
Augmented reality (AR) applications are currently used in many fields for communication and educational purposes. Tourism is also a sector where augmented reality is used for destination marketing and cultural heritage promotion. This study focuses on mobile location-based AR applications and their potential in tourism. Such applications can guide tourists to places of interest and enhance their overall experience. The aim of this paper is to present a mobile application created for tourists visiting the region of Western Macedonia, Greece. The application was developed to guide users around the region, entertain them, and educate them about the region’s sights, cultural heritage, and other special characteristics. The paper also presents the broad set of features included in the application, spanning various types of AR (marker-based, markerless, and location-based), in order to offer new ideas to designers who wish to create AR applications for tourism. The application was validated by a usability test, and its features were evaluated by 39 participants who completed a questionnaire with 29 Likert-scale items. This procedure revealed the level of acceptance of the application’s features, and valuable feedback was also received during a discussion with the participants about how the application could be upgraded in the future.
Digital doi: 10.3390/digital4010013
Authors: Camille Velasco Lim, Yu-Peng Zhu, Muhammad Omar, Han-Woo Park
Although artificial intelligence technologies have provided valuable insights into the advertising industry, more comprehensive studies that properly examine the applications of AI in advertising using scientometric network analysis are needed. Using publications from journals indexed in the Web of Science, we seek to analyze the emergence of AI through the examination of keyword co-occurrences and co-authorship. Our goal is to identify essential concepts and influential research that have significantly impacted the advertising business. The findings highlight noteworthy patterns, indicating the growing importance of machine learning tools and techniques such as deep learning, and advanced natural language processing methods like word2vec, GANs, and others, as well as their societal impacts as they continue to define the future of advertising practices.
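To make the method concrete, here is a minimal, self-contained Python sketch of keyword co-occurrence counting, the building block of the network analysis the abstract describes; the paper itself works on Web of Science records, and the keyword lists below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(keyword_lists):
    """Count how often each pair of keywords appears in the same publication."""
    edges = Counter()
    for keywords in keyword_lists:
        # Sort so that (a, b) and (b, a) map to the same undirected edge.
        for pair in combinations(sorted(set(keywords)), 2):
            edges[pair] += 1
    return edges

# Illustrative keyword lists, not data from the study.
papers = [
    ["advertising", "deep learning", "word2vec"],
    ["advertising", "deep learning", "GANs"],
    ["advertising", "machine learning"],
]
edges = cooccurrence_network(papers)
print(edges[("advertising", "deep learning")])  # → 2
```

The resulting edge weights are exactly what tools such as VOSviewer visualize as a keyword co-occurrence map.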
Digital doi: 10.3390/digital4010012
Authors: Daiju Kato, Hiroshi Ishikawa
Since the Software Quality Model was defined as an international standard, many quality assurance teams have used this quality model for quality control in waterfall-style software development. As more software is delivered as a cloud service, various methodologies have been created with an awareness of the link between development productivity and operations, enabling faster development. However, most development methods are development-oriented, focused on development progress, and little consideration has been given to methods that achieve quality orientation for continuous quality improvement and monitoring. Therefore, we developed a method to visualize the progress of software quality during development by defining quality goals in the project charter using the quality model defined in international standards, classifying each test by quality characteristics, and clarifying the quality ensured by each test. To use quality characteristics as KPIs, it is necessary to manage test results for each test type and compare them with past build results. This paper explains how to visualize the quality to be assured and the benefits of using quality characteristics as KPIs, and proposes a method to achieve rapid and high-quality product development.
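A minimal sketch of the KPI idea: if each test is classified by an ISO/IEC 25010 quality characteristic, a per-characteristic pass rate can be tracked and compared across builds. The characteristic names and results below are illustrative, not taken from the paper.

```python
def quality_kpis(test_results):
    """Aggregate pass rates per quality characteristic (e.g., ISO/IEC 25010)."""
    totals, passed = {}, {}
    for characteristic, ok in test_results:
        totals[characteristic] = totals.get(characteristic, 0) + 1
        passed[characteristic] = passed.get(characteristic, 0) + (1 if ok else 0)
    return {c: passed[c] / totals[c] for c in totals}

# Illustrative results: (quality characteristic, test passed?)
build_42 = [("reliability", True), ("reliability", True),
            ("performance efficiency", True), ("performance efficiency", False)]
kpis = quality_kpis(build_42)
print(kpis["reliability"])             # → 1.0
print(kpis["performance efficiency"])  # → 0.5
```

Comparing these per-characteristic rates against the previous build's rates gives the trend the paper proposes monitoring.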
Digital doi: 10.3390/digital4010011
Authors: Markos Konstantakis, Georgios Trichopoulos, John Aliprantis, Nikitas Gavogiannis, Anna Karagianni, Panos Parthenios, Konstantinos Serraos, George Caridakis
The paper introduces an innovative methodology that combines photogrammetry and laser scanning techniques to create detailed 3D models of historic mansions within the Kifissia region of Attica, Greece. While photogrammetry excels in capturing intricate textures, it faces challenges such as lighting variations and precise image alignment. On the other hand, laser scanning offers precision in capturing geometric details but struggles with reflective surfaces and large datasets. Our study integrates these methods to leverage their strengths and address limitations, resulting in comprehensive and accurate digital twins of cultural spaces. The methodology section outlines the step-by-step process of integration, emphasizing solutions to specific challenges encountered in the study area. Preliminary results showcase the enhanced fidelity and completeness of the digital twins, demonstrating the effectiveness of the combined approach. The subsequent sections of the paper delve into a detailed presentation of the methodology, provide a comprehensive analysis of obtained results, and discuss the implications of this innovative approach in cultural preservation and broader applications.
Digital doi: 10.3390/digital4010010
Authors: Eleni Korosidou
This study aspires to contribute some initial results to the growing area of research regarding technology potential in the field of early foreign language literacy. An experiment was conducted to examine very young learners’ alphabet and vocabulary learning and retention in an early foreign language (FL) learning context when implementing augmented reality (AR) applications, while very young learners’ motivation was also assessed. A pilot intervention was implemented in a state school in northern Greece. The participants (n = 26) were primary school first-graders (5.5–6 years old) and were assigned into two groups, experimental (13) and control (13). To examine the effects of the intervention, this current study employed two instruments: (a) a pre-test–post-test model to assess young learners’ alphabet and vocabulary learning during three phases and (b) a questionnaire to assess their motivation during the learning process. The findings of this study reveal that both groups displayed significant improvements in FL alphabet and vocabulary learning; however, there are statistical differences in favor of the experimental group regarding long-term alphabet and vocabulary learning and retention. Furthermore, qualitative results regarding children’s perceptions of the technology used indicate that AR was highly appealing and motivating to participating students.
Digital doi: 10.3390/digital4010009
Authors: Bartłomiej Hadasik, Maria Mach-Król
The COVID-19 pandemic led to widespread restrictions globally, prompting governments to implement measures for containment. Vaccines, while aiding in reducing virus transmission, have also introduced the challenge of identifying vaccinated individuals for the purpose of easing restrictions. The European Union (EU) addressed this through the “digital COVID-19 certification” system, allowing citizens to travel within the EU based on their vaccination, recovery, or negative test status. However, the system’s digital format poses challenges for those who are not digitally proficient, such as seniors and those with low educational or socioeconomic status. This study aims to propose enhancements to the current system, considering the mobility needs of all citizens. The methodology involves reviewing literature on digital literacy, the digital divide, and information systems related to vaccination and certification. The paper presents straightforward recommendations to make the COVID-19 certificate more accessible to digitally excluded individuals. These proposals may serve as a valuable starting point for healthcare executives to evaluate and adapt the certification scheme to be inclusive of a broader range of stakeholders.
Digital doi: 10.3390/digital4010008
Authors: Ângela Leite, Anabela Rodrigues, Ana Margarida Ribeiro, Sílvia Lopes
The aim of this study is to assess whether social media addiction contributes to the intention to buy; it is based on the model of Hajli (2014), which assesses the relationships between the constructs of social media use, trust, perceived usefulness, and intention to buy on social media sites. To this end, a confirmatory factor analysis (CFA) was carried out to evaluate whether the Hajli model applied to this sample, along with a multigroup CFA to measure invariance across gender and across whether participants followed influencers. Finally, a path analysis evaluated the intersection of social media addiction with the Hajli model. The results confirmed the Hajli model as well as the inclusion in the model of social media addiction as a variable that contributes to purchase intention on social media. Configural, metric, and scalar invariance were found across genders and across followers and non-followers of influencers. Also, the values found for internal consistency, composite reliability, convergent reliability, and discriminant reliability were within the reference values.
Digital doi: 10.3390/digital4010007
Authors: Rahaf Orabi
This article relies on a combination of digital and analog data to analyze the 2D urban development of al-ʿAqaba and Jallūm districts in the Old City of Aleppo. The dataset consists of vectorized historical maps of the city spanning various historical periods. The oldest map in the collection dates back to the 1900s. Additionally, there are high-resolution orthomosaics created from a 3D model obtained through Terrestrial Laser Scanning (TLS) and Aerial Photogrammetry techniques. Through the analysis and integration of these various data types, the article proposes an analog-digital workflow that tracks the alterations in the urban fabric of the designated study area. The analysis primarily examines the alterations in the city’s two-dimensional layout and the distribution of mass and void. Tracking the changes in the street network of the studied area is the main goal of this research, along with recognizing the spatial changes in the built environment. The article identified changes in both the open spaces and the street layout.
Digital doi: 10.3390/digital4010006
Authors: Yannis Skarpelos, Sophia Messini, Elina Roinioti, Kostas Karpouzis, Stavros Kaperonis, Michaela-Gavriela Marazoti
While most published research on COVID-19 focused on a few countries and especially on the second wave of the pandemic and the vaccination period, we turn to the first wave (March–May 2020) to examine the sentiments and emotions expressed by Twitter users in Greece. Using deep-learning techniques, the analysis reveals a complex interplay of surprise, anger, fear, and sadness. Initially, surprise was dominant, reflecting the shock and uncertainty accompanying the sudden onset of the pandemic. Anger replaced surprise as individuals struggled with isolation and social distancing. Despite these challenges, positive sentiments of hope, resilience and solidarity were also expressed. The COVID-19 pandemic had a strong imprint upon the emotional landscape worldwide and in Greece. This calls for appealing to emotions as well as to reason when crafting effective public health strategies.
Digital doi: 10.3390/digital4010005
Authors: Carlos Eduardo Andino Coello, Mohammed Nazeh Alimam, Rand Kouatly
This study explores the effectiveness and efficiency of the popular OpenAI model ChatGPT, powered by GPT-3.5 and GPT-4, in programming tasks to understand its impact on programming and potentially software development. To measure the performance of these models, a quantitative approach was employed using the Mostly Basic Python Problems (MBPP) dataset. In addition to the direct assessment of GPT-3.5 and GPT-4, a comparative analysis involving other popular large language models in the AI landscape, notably Google’s Bard and Anthropic’s Claude, was conducted to measure and compare their proficiency in the same tasks. The results highlight the strengths of ChatGPT models in programming tasks, offering valuable insights for the AI community, specifically for developers and researchers. As the popularity of artificial intelligence increases, this study serves as an early look into the field of AI-assisted programming.
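The MBPP benchmark scores a model by running its generated code against assert-based test cases. A minimal harness along those lines (not the authors' actual evaluation code) can be sketched as:

```python
def passes_tests(candidate_code, test_cases):
    """Run model-generated code, then MBPP-style assert statements against it."""
    namespace = {}
    try:
        exec(candidate_code, namespace)   # define the candidate function
        for test in test_cases:
            exec(test, namespace)         # each test is an assert statement
        return True
    except Exception:
        return False

# Illustrative MBPP-style task, not an actual dataset entry.
generated = (
    "def first_repeated_char(s):\n"
    "    seen = set()\n"
    "    for ch in s:\n"
    "        if ch in seen:\n"
    "            return ch\n"
    "        seen.add(ch)\n"
    "    return None"
)
tests = ["assert first_repeated_char('abba') == 'b'",
         "assert first_repeated_char('abc') is None"]
print(passes_tests(generated, tests))  # → True
```

Aggregating this boolean over all problems in the dataset yields the pass rate that studies like this one compare across models; a real harness would also sandbox and time-limit the executed code.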
Digital doi: 10.3390/digital4010004
Authors: Zhenkai Chen, Wenjing Zhou, Yingjie Yu, Vivi Tornari, Gilberto Artioli
In this paper, a weighted least-squares algorithm based on the Gaussian 1σ-criterion and histogram segmentation is applied and validated on digital holographic speckle pattern interferometric data to perform phase separation on the complex interference fields. This direct structural diagnosis tool is used to investigate defects and their impact on a complex antique wall painting by Giotto. The interferometry data are acquired with a portable off-axis interferometer set-up in which a phase-shifted reference beam is coupled with the object beam in front of the digital photosensitive medium. A digital holographic speckle pattern interferometry (DHSPI) system is used to register digital recordings of interferogram sequences over time. The surface is monitored for as long as it deforms, until it returns to the reference equilibrium state it held prior to excitation. The attempt to separate the whole-surface and local defect complex amplitudes from the interferometric data is presented. The main aim is to isolate and visualize each defect’s impact amplitude in order to obtain detailed documentation of each defect and its structural impact on the surface for structural diagnosis purposes.
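The Gaussian 1σ-criterion can be illustrated with a toy sketch: phase values within one standard deviation of the mean are attributed to the whole-surface deformation, and outliers to local defects. This is a strong simplification of the paper's weighted least-squares method, with invented sample values.

```python
def separate_phase(phase_map):
    """Gaussian 1-sigma criterion (illustrative): values within mean +/- 1 std
    are treated as the global (whole-surface) deformation; outliers are
    treated as local defect contributions."""
    n = len(phase_map)
    mean = sum(phase_map) / n
    std = (sum((p - mean) ** 2 for p in phase_map) / n) ** 0.5
    whole = [p for p in phase_map if abs(p - mean) <= std]
    defects = [p for p in phase_map if abs(p - mean) > std]
    return whole, defects

# Invented phase values with one defect-like spike.
phases = [0.1, 0.12, 0.11, 0.09, 0.10, 0.95, 0.13]
whole, defects = separate_phase(phases)
print(defects)  # → [0.95]
```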
Digital doi: 10.3390/digital4010003
Authors: Shaina Raza
News recommender systems (NRS) are crucial for helping users navigate the vast amount of content available online. However, traditional NRS often suffer from biases that lead to a narrow and unfair distribution of exposure across news items. In this paper, we propose a novel approach, the Contextual-Dual Bias Reduction Recommendation System (C-DBRRS), which leverages Long Short-Term Memory (LSTM) networks optimized with a multi-objective function to balance accuracy and diversity. We conducted experiments on two real-world news recommendation datasets; the results indicate that our approach outperforms the baseline methods, achieving higher accuracy while promoting a fair and balanced distribution of recommendations. This work contributes to the development of fair and responsible recommendation systems.
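The paper's C-DBRRS model is an LSTM trained with a multi-objective function; the accuracy/diversity trade-off it targets can be illustrated with a much simpler greedy re-ranking sketch (not the authors' method), where a parameter lam weighs base-model relevance against topical redundancy:

```python
def rerank(candidates, lam=0.5, k=3):
    """Greedy re-ranking: score = lam * relevance - (1 - lam) * topic overlap
    with already-selected items (a simple accuracy/diversity trade-off)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item):
            title, topic, relevance = item
            overlap = sum(1 for _, t, _ in selected if t == topic)
            return lam * relevance - (1 - lam) * overlap
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return [title for title, _, _ in selected]

# Illustrative candidates: (title, topic, relevance score from a base model).
items = [("A", "politics", 0.9), ("B", "politics", 0.8),
         ("C", "sports", 0.7), ("D", "politics", 0.6)]
print(rerank(items))  # → ['A', 'C', 'B']
```

With lam=1.0 the ranking collapses to pure relevance order, illustrating how the weighting steers the accuracy/diversity balance.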
Digital doi: 10.3390/digital4010002
Authors: Victor García
This paper presents a study guide and an analysis of its use in the computer programming learning process of an 8-year-old elementary school student through the Scratch program. The research’s objective is to explore and understand how this individual student approaches learning programming skills and tackles challenges within the Scratch environment. An individual case study approach was adopted at home, combining qualitative and quantitative methods to gain a comprehensive insight into the student’s learning process. The study was conducted without grant support, and the researcher actively participated as an educator and observer in the student’s learning sessions. Performance was assessed, and a semi-structured interview was conducted to inquire about the student’s experiences, motivations, and interests regarding programming in Scratch, as well as their feelings after the training. Additionally, the student’s activities during programming sessions were meticulously recorded, and projects created in Scratch were analyzed to assess progress and understanding of concepts. The findings of this research have the potential to contribute to the field of programming education and provide valuable insights into how young elementary school-aged individuals can acquire computer and programming skills in an interactive environment such as Scratch. The results obtained demonstrate that using the proposed guide to introduce elementary school students to programming at home, with parents acting as educators, is feasible. Therefore, it helps facilitate access to this knowledge, which is currently limited for many individuals in an official educational setting.
Digital doi: 10.3390/digital4010001
Authors: Sunzida Siddique, Mohd Ariful Haque, Roy George, Kishor Datta Gupta, Debashis Gupta, Md Jobair Hossain Faruk
Machine learning (ML) has become increasingly prevalent in various domains. However, ML algorithms sometimes produce unfair outcomes and discriminate against certain groups: bias occurs when a model’s decisions are systematically incorrect for particular groups. These biases appear at various phases of the ML pipeline, such as data collection, pre-processing, model selection, and evaluation. A variety of bias reduction methods for ML have been suggested. By changing the data or the model itself, adding fairness constraints, or both, these methods try to lessen bias. The best technique depends on the particular context and application, because each technique has advantages and disadvantages. Therefore, in this paper, we present a comprehensive survey of bias mitigation techniques in machine learning, with an in-depth exploration of methods including adversarial training. We examine the diverse types of bias that can afflict ML systems, elucidate current research trends, and address future challenges. Our discussion encompasses a detailed analysis of pre-processing, in-processing, and post-processing methods, including their respective pros and cons. Moreover, we go beyond qualitative assessments by quantifying the strategies for bias reduction and providing empirical evidence and performance metrics. This paper serves as a resource for researchers, practitioners, and policymakers seeking to navigate the intricate landscape of bias in ML, offering both a profound understanding of the issue and actionable insights for responsible and effective bias mitigation.
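As one concrete example of the pre-processing family discussed in such surveys, the following sketch implements reweighing in the style of Kamiran and Calders: training instances are weighted so that the protected group and the label become statistically independent under the weights. The data below are invented for illustration.

```python
from collections import Counter

def reweigh(samples):
    """Reweighing (Kamiran-Calders style):
    weight(g, y) = P(group=g) * P(label=y) / P(group=g, label=y),
    so group and label are independent under the weighted distribution."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    joint_counts = Counter(samples)
    return {gy: (group_counts[gy[0]] * label_counts[gy[1]]) / (n * joint_counts[gy])
            for gy in joint_counts}

# Illustrative data: (protected group, outcome label).
# Group "a" gets the positive label far more often than group "b".
data = [("a", 1)] * 3 + [("a", 0)] * 1 + [("b", 1)] * 1 + [("b", 0)] * 3
weights = reweigh(data)
print(round(weights[("a", 1)], 2))  # → 0.67
print(round(weights[("b", 1)], 2))  # → 2.0
```

Under-represented (group, label) combinations receive weights above 1 and over-represented ones below 1, which a downstream learner can consume as sample weights.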
Digital doi: 10.3390/digital3040020
Authors: Jon Dron
This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. 
Its contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of generative AIs as collectives (forms of collective intelligence), reasons to avoid using GAIs to replace teachers, and a theoretically grounded framework to guide the adoption of generative AIs in education.
Digital doi: 10.3390/digital3040019
Authors: Ferdous Sharifi, Ali Rasaii, Amirmohammad Pasdar, Shaahin Hessabi, Young Choon Lee
The emergence of fog computing has significantly enhanced real-time data processing by bringing computation resources closer to data sources. This is especially beneficial in the healthcare sector, where abundant time-sensitive processing tasks exist. Although promising, such adoption faces a challenge in the limited computational capacity of fog nodes. This challenge becomes even more critical when mobile IoT nodes enter the network, potentially increasing the network load. To address it, this paper presents a framework that leverages a Many-to-One offloading (M2One) policy designed for modelling the dynamic nature and time-critical aspect of processing tasks in the healthcare domain. The framework benefits from the multi-tier structure of the fog layer, making efficient use of the computing capacity of mobile fog nodes to enhance the overall computing capability of the fog network. Moreover, the framework accounts for mobile IoT nodes that generate an unpredictable volume of tasks at unpredictable intervals. Under the proposed policy, a first-tier fog node, called the coordinator fog node, manages all requests offloaded by the IoT nodes and allocates them to the fog nodes. It considers factors like the limited energy of the mobile nodes, the communication channel status, and low-latency demands to distribute requests among fog nodes and meet the stringent latency requirements of healthcare applications. In extensive simulations of a healthcare scenario, the policy improved average delay by approximately 30% compared to cloud computing and significantly reduced network usage.
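The coordinator's role can be illustrated with a toy dispatch function (an assumption-laden sketch, not the paper's exact M2One policy): among fog nodes with enough remaining energy to process a task, pick the one with the lowest estimated completion latency.

```python
def assign_task(task_cycles, fog_nodes):
    """Coordinator-style dispatch (illustrative): choose the fog node with the
    lowest estimated latency among nodes with enough remaining energy."""
    best, best_latency = None, float("inf")
    for node in fog_nodes:
        if node["energy"] < task_cycles * node["energy_per_cycle"]:
            continue  # mobile node too low on battery for this task
        latency = node["link_delay"] + task_cycles / node["cycles_per_ms"]
        if latency < best_latency:
            best, best_latency = node, latency
    return best["name"] if best else None

# Invented node parameters for illustration.
nodes = [
    {"name": "fog-1", "energy": 50, "energy_per_cycle": 0.001,
     "link_delay": 2.0, "cycles_per_ms": 500},
    {"name": "mobile-fog-2", "energy": 5, "energy_per_cycle": 0.001,
     "link_delay": 0.5, "cycles_per_ms": 400},
]
print(assign_task(10_000, nodes))  # → fog-1
```

A large task skips the energy-constrained mobile node, while a small one lands on it thanks to its lower link delay.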
Digital doi: 10.3390/digital3040018
Authors: Alex Zarifis, Shixuan Fu
Mobile apps utilize the features of a mobile device to offer an ever-growing range of functionalities. This vast choice of functionalities is usually available for a small fee or for free. These apps access the user’s personal data, utilizing both the sensors on the device and big data from several sources. Nowadays, Artificial Intelligence (AI) is enhancing the ability to utilize more data and gain deeper insight. This increase in the access and utilization of personal information offers benefits but also challenges to trust. Using questionnaire data from Germany, this research explores the role of trust from the consumer’s perspective when purchasing mobile apps with enhanced AI. Models of trust from e-commerce are adapted to this specific context. A model is proposed and explored with quantitative methods. Structural Equation Modeling enables the relatively complex model to be tested and supported. Propensity to trust, institution-based trust, perceived sensitivity of personal information, and trust in the mobile app are found to impact the intention to use the mobile app with enhanced AI.
Digital doi: 10.3390/digital3030017
Authors: Ali Alqahtani, Sumayya Azzony, Leen Alsharafi, Maha Alaseri
In this article, we introduce a web-based malware detection system that leverages a deep-learning approach. Our primary objective is the development of a robust deep-learning model for classifying malware in executable files. In contrast to conventional malware detection systems, our approach relies on static detection techniques to determine whether a file is malicious or benign. Our method makes use of a one-dimensional convolutional neural network (1D-CNN) suited to the nature of the portable executable file. Significantly, static analysis aligns with our objectives, allowing us to uncover static features within the portable executable header. This choice holds particular significance given the risks associated with dynamic detection, which often necessitates controlled environments, such as virtual machines, to mitigate dangers. Moreover, we integrate this deep-learning method into a web-based system, rendering it accessible and user-friendly via a web interface. Empirical evidence showcases the efficiency of our proposed methods, as demonstrated in extensive comparisons with state-of-the-art models across three diverse datasets. Our results affirm the effectiveness of our approach, delivering a practical, dependable, and rapid mechanism for identifying malware within executable files.
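The core operation of a 1D-CNN over raw bytes can be sketched in plain Python: a convolution along the byte sequence, a ReLU, and global max pooling. Real models learn their kernels from data; the kernels and header bytes below are hand-set for illustration only.

```python
def conv1d_features(byte_window, kernels):
    """One 1D convolution layer over a raw byte sequence (valid padding),
    followed by ReLU and global max pooling: the basic operation behind a
    1D-CNN, shown with hand-set kernels instead of learned ones."""
    x = [b / 255.0 for b in byte_window]        # normalize bytes to [0, 1]
    features = []
    for k in kernels:
        n = len(x) - len(k) + 1                 # valid-padding output length
        conv = [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]
        features.append(max(max(c, 0.0) for c in conv))  # ReLU + max pool
    return features

# A PE file begins with the 'MZ' magic bytes (0x4D, 0x5A); the rest is invented.
header = [0x4D, 0x5A, 0x90, 0x00, 0x03, 0x00, 0x00, 0x00]
kernels = [[1.0, -1.0], [0.5, 0.5]]
features = conv1d_features(header, kernels)
print(len(features))  # → 2
```

In a trained model, many such learned kernels slide over a much longer byte window, and the pooled features feed a classifier head that outputs malicious vs. benign.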
Digital doi: 10.3390/digital3030016
Authors: Pamela Cowan, Rachel Farrell
This small-scale study comprised an evaluation of a teacher professional learning experience that involved the collaborative creation of resources using immersive virtual reality (VR) as a retrieval practice tool, specifically focusing on the open access aspects of the SchooVR platform. SchooVR offers teachers and students tools to enhance teaching and learning by providing a range of virtual field trips and the ability to create customised virtual tours aligned with curriculum requirements. By leveraging the immersive 360° learning environment, learners can interact with content in meaningful ways, fostering engagement and deepening understanding. This study draws on the experiences of a group of postgraduate teacher education students and co-operating teachers in Ireland and Northern Ireland who collaborated on the creation of a number of immersive learning experiences across a range of subjects during a professional learning event. The research showcases how immersive realities, such as VR, can be integrated effectively into blended learning spaces to create resources that facilitate retrieval practice and self-paced study, thereby supporting the learning process. By embedding VR experiences into the curriculum, students are given opportunities for independent practice, review, and personalised learning tasks, all of which contribute to the consolidation of knowledge and the development of metacognitive skills. The findings suggest that SchooVR and similar immersive technologies have the potential to enhance educational experiences and promote effective learning outcomes across a variety of subject areas.
Digital doi: 10.3390/digital3030015
Authors: Marcus Birkenkrahe
This paper presents a case study on using Emacs and Org-mode for literate programming in undergraduate computer and data science courses. Over three academic terms, the author mandated these tools across courses in R, Python, C++, SQL, and more. The onboarding relied on simplified Emacs tutorials and starter configurations. Students gained proficiency after undertaking initial practice. Live coding sessions demonstrated the flexible instruction enabled by literate notebooks. Assignments and projects required documentation alongside functional code. Student feedback showed enthusiasm for learning a versatile IDE, despite some frustration with the learning curve. Skilled students highlighted efficiency gains in a unified environment. However, the uneven adoption of documentation practices pointed to a need for better incorporation into grading. Additionally, some students found Emacs unintuitive, desiring more accessible options. This highlights a need to match tools to skill levels, potentially starting novices with graphical IDEs before introducing Emacs. The key takeaways are as follows: literate programming aids comprehension but requires rigorous onboarding and reinforcement, and Emacs excels for advanced workflows but has a steep initial curve. With proper support, these tools show promise for data science education.
Digital doi: 10.3390/digital3030014
Authors: Nisha Rawindaran, Liqaa Nawaf, Suaad Alarifi, Daniyal Alghazzawi, Fiona Carroll, Iyad Katib, Chaminda Hewage
The emergence of Industry 5.0 has revolutionized technology by integrating physical systems with digital networks. These advancements have also led to an increase in cyber threats, posing significant risks, particularly for small and medium-sized enterprises (SMEs). This research investigates the resistance of SMEs in Saudi Arabia and the United Kingdom (UK) to cyber security measures within the context of Industry 5.0, with a specific focus on governance and policy. It explores the cultural and economic factors contributing to this resistance, such as limited awareness of cyber security risks, financial constraints, and competing business priorities. Additionally, the study examines the role of government policies and regulations in promoting cyber security practices among SMEs and compares the approaches adopted by Saudi Arabia and the UK. By employing a mixed methods analysis, including interviews with SME owners and experts, the research highlights challenges and opportunities for improving cyber security governance and policy in both countries. The findings emphasize the need for tailored solutions due to the differing cultural and economic contexts between Saudi Arabia and the UK. Specifically, the study delves into the awareness and implementation of cyber security measures, focusing on SMEs in Saudi Arabia and their adherence to the Essential Cyber Security Controls (ECC-1:2018) guidelines. Furthermore, it examines the existing cyber security awareness practices and compliance in the UK, while also comparing official guidance documents aimed at supporting SMEs in achieving better cyber security practices. Based on the analysis, greater engagement with these documents is recommended in both countries to foster awareness, confidence, and compliance among SMEs, ultimately enhancing their cyber security posture. 
This paper offers a comparative research study on governance and policy between Saudi Arabia and the UK, presenting a set of recommendations to strengthen cyber security awareness and education, fortify regulatory frameworks, and foster public–private partnerships to combat cyber security threats in the Industry 5.0 landscape.
Digital doi: 10.3390/digital3030013
Authors: Alexandr Semyonov, Elena Bogdan, Elena Shamal, Aelita Sargsyan, Karapet Davtyan, Natasha Azzopardi-Muscat, David Novillo-Ortiz
This paper examines the status of the development of national digital health information systems (HIS) in Commonwealth of Independent States (CIS) member states. Data were collected using a questionnaire adapted from the WHO’s Third Global Survey on eHealth. The results showed that the digital transformation of HIS was occurring in all seven participating CIS member states and was financed from a variety of sources. Laws and regulations on electronic medical record (EMR) use were present in almost all participating CIS member states. Various international standards and classifications were used to support the development and interoperability of digital health information systems (d-HIS), including the International Classification of Diseases (ICD), Digital Imaging and Communications in Medicine (DICOM), ISO 18308, Logical Observation Identifiers Names and Codes (LOINC), Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and ISO TC 215. Several CIS member states had adopted a national information security strategy for the safe processing of both personal data and confidential medical information. The digital transformation of healthcare and the Empowerment through Digital Health initiative are taking place in all CIS member states, which are at different stages of introducing electronic medical and health records.
Digital doi: 10.3390/digital3030012
Authors: Dimitrios A. Koutsomitropoulos, Ioanna C. Gogou
Convolutional Neural Networks (CNNs) are well studied and commonly used for the problem of object detection thanks to their increased accuracy. However, high accuracy on its own says little about the effective performance of CNN-based models, especially when real-time detection tasks are involved. To the best of our knowledge, there has not been sufficient evaluation of the available methods in terms of their speed/accuracy trade-off. This work performs a review and hands-on evaluation of the most fundamental object detection models on the Common Objects in Context (COCO) dataset with respect to this trade-off, their memory footprint, and computational and storage costs. In addition, we review available datasets for medical mask detection and train YOLOv5 on the Properly Wearing Masked Faces Dataset (PWMFD). Next, we test and evaluate a set of specific optimization techniques, transfer learning, data augmentations, and attention mechanisms, and we report on their effect on real-time mask detection. Based on our findings, we propose an optimized model based on YOLOv5s, using transfer learning, for the detection of correctly and incorrectly worn medical masks; it surpassed the state-of-the-art SE-YOLOv3 model on the PWMFD by more than a factor of two in speed (69 frames per second) while maintaining the same level of mean Average Precision (67%).
]]>Digital doi: 10.3390/digital3030011
Authors: Michael Max Bühler Igor Calzada Isabel Cane Thorsten Jelinek Astha Kapoor Morshed Mannan Sameer Mehta Vijay Mookerje Konrad Nübel Alex Pentland Trebor Scholz Divya Siddarth Julian Tait Bapu Vaitla Jianguo Zhu
Network effects, economies of scale, and lock-in effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship, new skills, and jobs, especially in small communities and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being, and protect digital rights, we propose data cooperatives as a vehicle for secure, trusted, and sovereign data exchange. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted application programming interfaces (“APIs”) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This review article analyses an array of transformative use cases that underline the potential of cooperative data governance.
These case studies exemplify how data and platform cooperatives, through their innovative value creation mechanisms, can elevate digital commons and value chains to a new dimension of collaboration, thereby addressing pressing societal issues. Guided by our research aim, we propose a policy framework that supports the practical implementation of digital federation platforms and data cooperatives. This policy blueprint intends to facilitate sustainable development in both the Global South and North, fostering equitable and inclusive data governance strategies.
]]>Digital doi: 10.3390/digital3020010
Authors: Emanuela Marasco Karl Ricanek Huy Le
AI-empowered sweat metabolite analysis is an emerging and open research area with great potential to add a third category to biometrics: chemical. Current biometrics use two types of information to identify humans: physical (e.g., face, eyes) and behavioral (e.g., gait, typing). Sweat offers a promising solution for enriching human identity with more discerning characteristics to overcome the limitations of current technologies (e.g., demographic differential and vulnerability to spoof attacks). The analysis of a biometric trait’s chemical properties holds potential for providing a meticulous perspective on an individual. This not only changes the taxonomy for biometrics, but also lays a foundation for more accurate and secure next-generation biometric systems. This paper discusses existing evidence about the potential held by sweat components in representing the identity of a person. We also highlight emerging methodologies and applications pertaining to sweat analysis and guide the scientific community towards transformative future research directions to design AI-empowered systems of the next generation.
]]>Digital doi: 10.3390/digital3020009
Authors: Vassilios Krassanakis Loukas-Moysis Misthos
This article aims to present the authors’ perspective regarding the challenges and opportunities of mouse-tracking methodology in experimental research, particularly related to the map-reading process. We briefly describe existing metrics, visualization techniques and software tools utilized for the qualitative and quantitative analysis of experimental mouse-movement data towards the examination of both perceptual and cognitive issues. Moreover, we concisely report indicative examples of mouse-tracking studies in the field of cartography. The article concludes by summarizing the strengths, potential, and limitations of mouse tracking compared to eye tracking. In a nutshell, mouse tracking is a straightforward method, particularly suitable for tracking real-life behaviors in interactive maps, that provides the valuable opportunity for remote experimentation; although it is not suitable for tracking actual free-viewing behavior, it can be utilized concurrently with other state-of-the-art experimental methods.
]]>Digital doi: 10.3390/digital3020008
Authors: Rawan Masoud Sarah Basahel
Digital transformation (DT) has attracted the attention of management and organizational scholars in the past decade. In addition, firms are increasingly interested in using DT to obtain a competitive advantage. Nevertheless, studies on DT outcomes remain scarce. Therefore, this study empirically investigated the effect of digital transformation on firm performance by classifying the capabilities required to realize digital transformation, customer experience, and IT innovation. A structured questionnaire was used to collect data from 164 representatives of service sector firms in Saudi Arabia, namely chief information officers, chief transformation officers, and IT managers. Based on the findings of this study, it is evident that digital transformation, customer experience, and IT innovation positively impact a firm’s performance, with customer experience exhibiting the strongest effect.
]]>Digital doi: 10.3390/digital3010007
Authors: Joelie Mandzufas Jeremiah Ayalde Daniel Ta Emily Munro Rigel Paciente Emmanuel Philip Pranoto Kaelyn King Kelly How Alanna Sincovich Mary Brushe Nicole Wickens Gabriella Wells Alix Woolard Melinda Edmunds Hannah Thomas Gina S. A. Trapp Karen Lombardi
The social media application TikTok allows users to view and upload short-form videos. Recent evidence suggests it has significant potential for both industry and health promoters to influence public health behaviours. This protocol describes a standardised, replicable process for investigations that can be tailored to various areas of research interest, allowing comparison of content and features across public health topics. The first 50 videos appearing in each of five relevant hashtags are sampled for analysis. Utilising a codebook with detailed definitions, engagement metadata and content variables applicable to any content area are captured, including an assessment of each video’s overall sentiment (positive, negative, or neutral). Additional specific coding variables can be developed to provide targeted information about videos posted within selected hashtags. A descriptive, cross-sectional content analysis is applied to the generic and specific data collected for a research topic area. This flexible protocol can be replicated for any health-related topic and may have wider application on other platforms or in assessing changes in content and sentiment over time. This protocol was developed by a collaborative team of child health and development researchers for application to a series of topics. Findings will be used to inform health promotion messaging and counter-advertising.
]]>Digital doi: 10.3390/digital3010006
Authors: Thomas Krabokoukis
This study offers a comprehensive examination of the literature surrounding technology and tools in the hospitality industry. A bibliometric analysis was performed on 709 Scopus-indexed publications from 2000 to January 2023, with a focus on identifying key players, institutions, research trends, and the co-occurrence of keywords. The results shed light on the scientific landscape of technology and tools in the hospitality sector, emphasizing the significance of big data and the customer experience in the sharing economy. The study also presents the architecture of new software that offers guests the ability to customize their hotel stay, classified as part of the first cluster in the co-occurrence of keywords analysis. This approach highlights the growing importance of big data and customer experience and makes a valuable contribution to the field by offering a tool for hotel booking customization. Furthermore, the study underscores the importance of collaboration between academic institutions and private companies in providing a mutually beneficial platform that exceeds the expectations of both hotels and guests.
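The keyword co-occurrence step of a bibliometric analysis can be sketched simply: count how often each unordered keyword pair appears in the same publication’s keyword list. The records below are invented stand-ins for Scopus keyword fields, not data from the study:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often each unordered keyword pair appears
    together in the same publication's keyword list."""
    counts = Counter()
    for keywords in keyword_lists:
        # sorted(set(...)) canonicalizes the pair order and
        # ignores duplicate keywords within one record.
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts

# Toy records standing in for Scopus keyword fields.
records = [
    ["big data", "customer experience", "hospitality"],
    ["big data", "hospitality"],
    ["sharing economy", "customer experience", "hospitality"],
]
counts = cooccurrence(records)
print(counts[("big data", "hospitality")])  # 2
```

The resulting pair counts are exactly what co-occurrence mapping tools visualize as weighted edges between keyword nodes.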
]]>Digital doi: 10.3390/digital3010005
Authors: Dejan Grba
Heralded by promises for the long-awaited economic empowerment of digital art and the paradigmatic shift of creative production, the art market’s fusion with blockchain technologies and the crypto economy has polarized opinions among artists, cultural workers, and economists. Its capricious dynamics and exuberance largely shroud the continuation of the art market’s ideology and the reinforcement of the disturbing political vectors of the crypto/blockchain complex. In this paper, I address several interrelated aspects of art tokenization in a compact and comprehensive critical framework that may be useful for a constructive discourse of contemporary digital art. By focusing on the core poetic principles of artmaking—which concern the historically informed autonomy of expression and socially responsible freedom of creative thinking—I identify some of the prospects for advancing digital art towards an ethically coherent and epistemologically relevant expressive stratum. The opening sections Introduction, Markets, and Contrivances outline the art market, its adoption of crypto technologies, and its influences on the production and expressive modes of digital art. Sections Ideologies and Myths describe the ideological and technical issues of the crypto economy, while Shams and Fallouts delve into the conceptual shortcomings and ethical, political, and creative consequences of the standard art tokenization rhetoric. The closing sections Options and Conclusion present the considerations for a productive assessment of blockchain technologies in digital art and summarize some of the alternative approaches for navigating and interfacing with the crypto art world.
]]>Digital doi: 10.3390/digital3010004
Authors: Kowshik Bhowmik Anca Ralescu
Suboptimal performance of cross-lingual word embeddings for distant and low-resource languages calls into question the isomorphic assumption integral to the mapping-based methods of obtaining such embeddings. This paper investigates the comparative impact of typological relationship and corpus size on the isomorphism between monolingual embedding spaces. To that end, two clustering algorithms were applied to three sets of pairwise degrees of isomorphism. A further goal of the paper is to determine the combination of isomorphism measure and clustering algorithm that best captures the typological relationship among the chosen set of languages. Of the three measures investigated, Relational Similarity seemed to best capture the typological information of the languages encoded in their respective embedding spaces. These language clusters can help us identify, without any pre-existing knowledge of the real-world linguistic relationships shared among a group of languages, the related higher-resource languages of low-resource languages. The presence of such languages in a cross-lingual embedding space can help improve the performance of low-resource languages in that space.
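As an illustration of clustering languages from pairwise isomorphism scores (the paper does not specify single linkage; this is just one simple choice, and the language pairs and distance values below are invented), a distance such as `1 - isomorphism degree` can drive an agglomerative procedure:

```python
def single_linkage(dist, n_clusters):
    """Agglomerative single-linkage clustering over a symmetric
    pairwise-distance dict keyed by sorted (item_a, item_b) tuples."""
    items = sorted({a for pair in dist for a in pair})
    clusters = [{x} for x in items]

    def d(p, q):  # cluster distance = distance of the closest pair
        return min(dist[tuple(sorted((a, b)))] for a in p for b in q)

    while len(clusters) > n_clusters:
        # Find and merge the two closest clusters.
        i, j = min(
            ((i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda ij: d(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] |= clusters.pop(j)
    return [sorted(c) for c in clusters]

# Toy "1 - isomorphism degree" distances for four languages:
# en/de are near-isomorphic, es/pt likewise; the groups are far apart.
dist = {
    ("de", "en"): 0.1, ("es", "pt"): 0.2,
    ("de", "es"): 0.8, ("de", "pt"): 0.9,
    ("en", "es"): 0.7, ("en", "pt"): 0.8,
}
print(single_linkage(dist, 2))  # [['de', 'en'], ['es', 'pt']]
```

With a well-behaved isomorphism measure, the recovered clusters should mirror typological families, which is exactly the property the paper evaluates.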
]]>Digital doi: 10.3390/digital3010003
Authors: Digital Editorial Office Digital Editorial Office
High-quality academic publishing is built on rigorous peer review [...]
]]>Digital doi: 10.3390/digital3010002
Authors: Alexandros Kleftodimos Maria Moustaka Athanasios Evagelou
Today, augmented reality (AR) applications are being used in many fields, such as advertising, entertainment, tourism, and education. Location-based augmented reality (AR) is a technology where interactive digital content is associated with real-world locations and their geo-based markers. The merging of the real-world environment with the digital content occurs when the user reaches these locations. Location-based AR applications are typically experienced using mobile devices with the ability to report the user’s location via GPS. These applications are increasingly used in education, since it has been shown that they positively affect student satisfaction and engagement. Furthermore, it is known in the literature that learner satisfaction and engagement increase when gamification and storytelling techniques are incorporated into the educational process. The aim of the study is to present two location-based, educational augmented-reality applications that utilize gamification and storytelling to provide cultural heritage knowledge about a prehistoric lake settlement. The study also aims to provide ideas and guidance to educators who wish to create applications that transform educational visits to archeological sites and museums into engaging augmented-reality experiences. Both applications underwent a preliminary evaluation using a sample of 71 higher-education students and a sample of 58 school students. The findings showed that the applications scored well in aspects such as ease of use, student satisfaction, and perceived educational usefulness.
]]>Digital doi: 10.3390/digital3010001
Authors: Marco Brambilla Hoda Badrizadeh Narges Malek Mohammadi Alireza Javadian Sabet
The rapid proliferation of social media has been redefining every facet of old marketing and customer engagement tactics, not only for low-end and mass-market products but also for luxury brands. In this context, brands face the challenge of balancing mass marketing strategies with accentuating the exclusivity of their offerings. Social media can be considered beneficial if brands employ it to reach the right audience, use the right platform, and incorporate the right content. In this work, we propose a sector-specific, integrated, and holistic investigation of the social media strategies of luxury brands together with the impact they generate in terms of user engagement as an indicator of their success. We provide empirical validation of the methods in the Italian market of the luxury fashion sector, offering a qualitative and quantitative analysis of the content shared on social media, considering the type, timing, and modality of sharing. We evaluate consumer-brand engagement in different contexts, including important live events in the field.
]]>Digital doi: 10.3390/digital2040030
Authors: Veronika Arefeva Roman Egger
In recent years, Natural Language Processing (NLP) has become increasingly important for extracting new insights from unstructured text data, and pre-trained language models can now perform state-of-the-art tasks like topic modeling, text classification, or sentiment analysis. Currently, BERT is the most widespread and widely used model, but it has been shown that BERT can be further optimized for domain-specific contexts. While a number of BERT models that improve downstream task performance for other domains already exist, an optimized BERT model for tourism had yet to be released. This study therefore aimed to develop and evaluate TourBERT, a pre-trained BERT model for the tourism industry. It was trained from scratch and outperforms BERT-Base in all tourism-specific evaluations. This study thus makes an essential contribution to the growing role of NLP in tourism by providing an open-source BERT model adapted to tourism requirements and particularities.
]]>Digital doi: 10.3390/digital2040029
Authors: Markos Katsianis Tuna Kalayci Apostolos Sarris
The emergence of the ubiquitous digital ecosystem has provided new momentum for research in archaeology and the cultural heritage domain [...]
]]>Digital doi: 10.3390/digital2040028
Authors: Najla Fattouch Imen Ben Lahmar Mouna Rekik Khouloud Boukadi
In the context of Industry 4.0, IoRT-aware BPs represent an attractive paradigm that aims to automate the classic business process (BP) using the internet of robotic things (IoRT). Nonetheless, executing these processes within an enterprise may be costly due to the resources consumed, recruitment costs, etc. To reduce these costs, the business process outsourcing (BPO) strategy can be applied to outsource a process partially or totally to external service suppliers. Despite the various advantages of BPO, it is not a trivial task for enterprises to determine which part of the process should be outsourced and to which environment it should be deployed. This paper deals with the outsourcing decision for an IoRT-aware BP between fog and/or cloud environments. The fog environment includes devices at the edge of the network, which can satisfy the requirements of latency-sensitive applications, while the cloud can meet applications’ availability and computational requirements. Toward these objectives, we performed an in-depth analysis of enterprise requirements and identified a set of relevant criteria that may impact the outsourcing decision. We then applied the method based on the removal effects of criteria (MEREC) to automatically generate the weights of the identified criteria. Using these weights, we selected the suitable execution environment with the ELECTRE IS method. To evaluate the approach, we asked an expert to help estimate its precision, recall, and F-score. The results show that our approach’s output is the closest to the expert’s and achieves acceptable values.
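The MEREC weighting step can be sketched under the method’s textbook formulation (normalize the decision matrix, compute a logarithmic overall performance per alternative, remeasure it with each criterion removed, and weight criteria by their total removal effect). The toy decision matrix below is invented and assumes benefit criteria only:

```python
import math

def merec_weights(matrix):
    """MEREC criteria weights for a benefit-criteria decision
    matrix (rows = alternatives, columns = criteria)."""
    m = len(matrix[0])
    col_min = [min(row[j] for row in matrix) for j in range(m)]
    # Normalize so that smaller values indicate better performance.
    norm = [[col_min[j] / row[j] for j in range(m)] for row in matrix]

    def perf(row, skip=None):
        # Logarithmic overall performance, optionally with one
        # criterion removed.
        total = sum(abs(math.log(v)) for j, v in enumerate(row) if j != skip)
        return math.log(1 + total / m)

    # Removal effect of each criterion, summed over alternatives.
    removal = [
        sum(abs(perf(row, skip=j) - perf(row)) for row in norm)
        for j in range(m)
    ]
    total = sum(removal)
    return [e / total for e in removal]

# Toy matrix: 3 candidate execution environments scored on 3 criteria.
weights = merec_weights([
    [450, 8000, 54],
    [10, 9100, 2],
    [100, 8200, 31],
])
print([round(w, 3) for w in weights])
```

Criteria whose removal barely changes the overall performance (here the second column, whose values are nearly identical across alternatives) receive correspondingly small weights.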
]]>Digital doi: 10.3390/digital2040027
Authors: Ietezaz Ul Hassan Raja Hashim Ali Zain Ul Abideen Talha Ali Khan Rand Kouatly
It is hard to trust any data entry on online websites, as some websites may be malicious and gather data for illegal or unintended use. For example, bank login and credit card information can be misused for financial theft. To make users aware of the digital safety of websites, we have tried to identify and learn the patterns in a dataset consisting of features of malicious and benign websites. We treated the differentiation between malicious and benign websites as a classification problem and applied several machine learning techniques, for example, random forest, decision tree, logistic regression, and support vector machines, to this data. Several evaluation metrics, such as accuracy, precision, recall, F1 score, and false positive rate, were used to evaluate the performance of each classification technique. Since the dataset was imbalanced, the machine learning models developed a bias during training toward a specific class of websites. Multiple data balancing techniques, for example, undersampling, oversampling, and SMOTE, were applied for balancing the dataset and removing the bias. Our experiments showed that after balancing the data, the random forest algorithm using the oversampling technique showed the best results in all evaluation metrics for the benign and malicious website feature dataset.
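The balancing step can be illustrated with random oversampling, the simplest of the techniques named above (the study also used undersampling and SMOTE); the toy feature rows below are invented:

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every
    class has as many samples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for features, label in zip(X, y):
        by_class.setdefault(label, []).append(features)
    target = max(len(samples) for samples in by_class.values())
    X_bal, y_bal = [], []
    for label, samples in by_class.items():
        # Draw duplicates with replacement to reach the target size.
        extra = [rng.choice(samples) for _ in range(target - len(samples))]
        for features in samples + extra:
            X_bal.append(features)
            y_bal.append(label)
    return X_bal, y_bal

# Toy imbalanced set: 4 "benign" rows, 1 "malicious" row.
X = [[0.1], [0.2], [0.3], [0.4], [9.9]]
y = ["benign", "benign", "benign", "benign", "malicious"]
X_bal, y_bal = random_oversample(X, y)
print(y_bal.count("benign"), y_bal.count("malicious"))  # 4 4
```

SMOTE differs in that it interpolates synthetic minority samples between neighbours rather than duplicating rows; either way, balancing is applied to the training split only, so evaluation metrics remain honest.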
]]>Digital doi: 10.3390/digital2040026
Authors: Katerina Kabassi Stelios Bekatoros Athanasios Botonis
The need to evaluate museum websites is an issue that has been highlighted by several researchers. In this paper, we focus on museums’ website evaluation and use as a case study the evaluation of natural history museums’ websites. For this evaluation experiment, MCDM methods are combined and compared. The focus of this paper is twofold: (1) checking the consistency of AHP for calculating the weights of criteria and (2) comparing Fuzzy TOPSIS and Fuzzy VIKOR with each other and with a usability evaluation questionnaire.
]]>Digital doi: 10.3390/digital2040025
Authors: Lukas Budde Christoph Benninghaus Roman Hänggi Thomas Friedli
The digital transformation is a complex and multi-faceted phenomenon that companies struggle to manage effectively. One particular facet of this phenomenon is the role of managers, which is still underrepresented in research. This study aims to identify which managerial practices and competencies are particularly needed to govern this transformation effectively, and to explain why. We chose the case study methodology as the research design, with eight manufacturing companies in Western Europe, and applied within- and cross-case analyses. Specific barriers to digital transformation and four aggregated managerial practices, namely strategy/organization, collaboration, cross-functionality, and data-driven use cases, were identified. These were supported by 13 competencies that facilitate digitalization. We explicate these practices based on change management theory and provide a model describing their impact on profitability. This study contributes to emergent change theory by analyzing the practices and competencies that managers should be equipped with to foster digitalization.
]]>Digital doi: 10.3390/digital2040024
Authors: Utku Demirci Pinar Karagoz
Recommendation has become an inseparable component of many software applications, such as e-commerce, social media and gaming platforms. Particularly in collaborative filtering-based recommendation solutions, the preferences of other users are considered heavily. At this point, trust among the users comes into the scene as an important concept to improve the recommendation performance. Trust describes the nature and the strength of ties between individuals and hence provides useful information to improve the recommendation accuracy, particularly against data sparsity and cold start problems. The notion of trust helps alleviate the effect of these problems by providing additional reliable relationships between the users. However, trust information, specifically explicit trust, is not straightforward to collect and is only scarcely available. Therefore, implicit trust models have been proposed to fill in the gap. The literature includes a variety of studies proposing the use of trust for recommendation. In this work, two specific sub-problems are elaborated on: the relationship between explicit and implicit trust scores, and the construction of a machine learning model for explicit trust. For the first sub-problem, an implicit trust model is devised and the compatibility of implicit trust scores with explicit scores is analyzed. For the second sub-problem, two different explicit trust models are proposed: explicit trust modeling through users’ rating behavior and explicit trust modeling as a link prediction problem. The performances of the prediction models are analyzed on a set of benchmark data sets. It is observed that explicit and implicit trust models have different natures, and are to be used in a complementary way for recommendation. Another important result is that the accuracy of the machine learning models for explicit trust is promising and depends on the availability of data.
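One simple way to build an implicit trust score of the kind discussed (a generic sketch, not necessarily the paper’s model) is to combine rating agreement on co-rated items with the overlap of two users’ rating histories; the user names, ratings, and the `tol` threshold below are all illustrative assumptions:

```python
def implicit_trust(ratings_u, ratings_v, tol=1.0):
    """A simple implicit-trust score between two users: the share
    of co-rated items on which their ratings differ by at most
    `tol`, damped by how much of their histories overlap (Jaccard).
    """
    common = set(ratings_u) & set(ratings_v)
    if not common:
        return 0.0
    agreement = sum(
        abs(ratings_u[i] - ratings_v[i]) <= tol for i in common
    ) / len(common)
    overlap = len(common) / len(set(ratings_u) | set(ratings_v))
    return agreement * overlap

# Hypothetical rating histories (item -> 1-5 star rating).
alice = {"item1": 5, "item2": 4, "item3": 2}
bob = {"item1": 5, "item2": 3, "item4": 1}
print(implicit_trust(alice, bob))  # 0.5
```

Scores like this give collaborative filtering an extra, denser relationship signal where explicit trust statements are missing, which is exactly how implicit trust mitigates sparsity and cold start.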
]]>Digital doi: 10.3390/digital2040023
Authors: Anita Casarotto
In the Mediterranean, field survey has been the most widely used method to detect archaeological sites in arable fields since the 1970s. Through survey, data about the state of preservation of ancient settlements have been extensively mapped by archaeologists over large rural landscapes using paper media (e.g., topographical maps) or GPS and GIS technologies. These legacy data are unique and irreplaceable for heritage management in landscape planning, territorial monitoring of cultural resources, and spatial data analysis to study past settlement patterns in academic research (especially in landscape archaeology). However, legacy data are at risk due to often improper digital curation and the dramatic land transformation that is affecting several regions. To access this vast knowledge production and allow for its dissemination, this paper presents a method based on student internships in data digitisation to review, digitise, and integrate archaeological primary survey data. A pilot study for Central–Southern Italy and the Iberian Peninsula exemplifies how the method works in practice. It is concluded that there are clear benefits for cultural resource management, academic research, and the students themselves. This method can thus help us to achieve large-scale collection, digitisation, integration, accessibility, and reuse of field survey datasets, as well as compare survey data on a supranational scale.
]]>Digital doi: 10.3390/digital2030022
Authors: Kyriaki A. Tychola Ioannis Tsimperidis George A. Papakostas
The representation of the physical world is an issue of growing concern to the scientific community studying computer vision. Recently, research has focused on modern techniques and methods of photogrammetry and stereoscopy with the aim of reconstructing three-dimensional realistic models with high accuracy and metric information in a short time. In order to obtain data at a relatively low cost, various tools have been developed, such as depth cameras. RGB-D cameras are novel sensing systems that capture RGB images along with per-pixel depth information. This survey aims to describe RGB-D camera technology. We discuss the hardware and data acquisition process, in both static and dynamic environments. Depth map sensing techniques are described, focusing on their features, pros, cons, and limitations; emerging challenges and open issues to investigate are analyzed; and some countermeasures are described. The advantages, disadvantages, and limitations of RGB-D cameras are also critically assessed. This survey will be useful for researchers who want to acquire, process, and analyze the collected data.
]]>Digital doi: 10.3390/digital2030021
Authors: Konstantinos I. Roumeliotis Nikolaos D. Tselikas Christos Tryfonopoulos
Currently, websites rely heavily on digital marketing, notably search engine optimization (SEO), for success. In the COVID-19 era, hotels have to employ every feasible means to stay afloat despite the bleak business and travel conditions. Many of them have already invested in digital marketing, especially SEO, by applying SEO techniques to their websites to attract more visitors and bookings. This research examines hotels’ websites regarding the SEO techniques they have applied and the impact of these techniques on web traffic. During a one-year observation period (February 2021–February 2022), we collected and analyzed web data from 309 top-listed Greek hotels using software we developed ourselves. By creating and following a specific methodology, we came to valuable conclusions. In addition, we used fuzzy cognitive mapping to develop an exploratory model. From the descriptive analysis and a technical SEO perspective, we conclude that hotel websites’ traffic and, by extension, their long-term viability are inextricably intertwined. Existing and future SEO marketers may benefit from our research’s time-accurate insights on hotel SEO tactics.
]]>Digital doi: 10.3390/digital2030020
Authors: Markos Konstantakis Yannis Christodoulou Georgios Alexandridis Alexandros Teneketzis George Caridakis
The modern cultural industry and the related academic sectors have shown increased interest in Cultural User eXperience (CUX) research, since it constitutes a critical factor to examine and apply when presenting cultural content. Recent CUX studies show that visitors tend to carry their own cultural characteristics and preferences when visiting destinations of cultural interest, thus obtaining a virtually unique experience. To cope with this tendency, various research efforts have been made to identify different profiles of cultural visitors based on their background and preferences and classify them into distinct visitor types. In this paper, we propose the ACUX (Augmented Cultural User eXperience) typology for classifying visitors of cultural destinations. The proposed typology aims to provide a multi-profile classification of cultural visitors based on their visiting preferences. Methodology-wise, the ACUX typology is the output of a harmonisation process applied to existing cultural-visitor typologies that base their classification on visiting preferences. The proposed typology was evaluated in juxtaposition with the harmonised typologies from which it was derived, through an experiment conducted using a recommender and a dataset of TripAdvisor user responses. The evaluation showed that the ACUX typology achieves more accurate profiling of cultural visitors, reducing information overload by directly suggesting content that is more likely to meet their diverse preferences and needs.
]]>Digital doi: 10.3390/digital2030019
Authors: Sophie C. Schmidt Florian Thiery Martina Trognitz
In this paper, we introduce Linked Open Data (LOD) in the archaeological domain as a means to connect dispersed data sources and enable cross-querying. The technology behind the design principles and how LOD can be created and published is described to enable less-familiar researchers to understand the presented benefits and drawbacks of LOD. Wikidata is introduced as an open knowledge hub for the creation and dissemination of LOD. Different actors within archaeology have implemented LOD, and we present which challenges have been and are being addressed. A selection of projects showcases how Wikidata is being used by archaeologists to enrich and open their databases to the general public. With this paper, we aim to encourage the creation and re-use of LOD in archaeology, as we believe it offers an improvement on current data publishing practices.
]]>Digital doi: 10.3390/digital2020018
Authors: Anita D. Bhappu Tea Lempiälä M. Lisa Yeo
Ridesharing platforms have gained a strong foothold as an alternative transportation option to vehicle ownership for consumers while being contested for causing widespread market disruption. They continue to foster business model innovation and unveil new opportunities for delivering goods and services within the broader sharing economy. However, relatively little is known about the comparative value of services provided by the numerous ridesharing platforms available today. We, therefore, analyze three exemplars within the broader sharing economy: Uber®, BlaBlaCar®, and Zimride®. We find that these ridesharing platforms are unique service systems with different designs for facilitating peer-to-peer service interactions, which are reflected in their technology features, affordances, and constraints. Our analysis offers researchers and platform owners new ways to conceptualize and understand these two-sided, digital markets with a range of participants, user goals, and service experiences. In particular, we demonstrate that platforms can be designed to cultivate entrepreneur dependency or enable prosumer communication and collaborative consumption. Given pending legislation to regulate platform-based work, platform owners should be mindful about creating an asymmetrical power imbalance with providers given assumptions about service interactions and technology features. Furthermore, researchers should account for service design differences, as well as the technology affordances and constraints, of platforms.
]]>Digital doi: 10.3390/digital2020017
Authors: Zoi Stamati Manolis I. Stefanakis Georgia Kontogianni Andreas Georgopoulos
In recent years, the rapid development of technology has offered scientists new powerful tools. Especially in the field of cultural heritage documentation, modern digital media are an integral part, contributing significantly to the process of recording, managing, and displaying architectural monuments, archaeological sites, and art objects in a fast and accurate way. Digital technologies have made it possible to produce accurate digital copies of heritage sites and contribute to their salvation and conservation. At the top of the hill of Agios Fokas, acropolis of the ancient Demos of Kymissaleis, are the remains of a small Hellenistic temple of the 3rd–2nd century BC. This article proposes a virtual reconstruction of the temple on the acropolis of Kymissala. The geometric documentation of the temple and the creation of a three-dimensional model with its virtual reconstruction are analyzed. Modern photogrammetric methods are applied by taking digital images in the context of the experimental application of a relatively simple and semi-automatic method that does not require highly specialized knowledge and therefore can be used by non-specialists. With the use of modeling software, a three-dimensional model of the temple is created with the main goal of its virtual reconstruction.
]]>Digital doi: 10.3390/digital2020016
Authors: Jeremy Huggett
Archaeology operates in an increasingly data-mediated world in which data drive knowledge and actions about people and things. Famously, data has been characterized as “the new oil”, underpinning modern economies and at the root of many technological transformations in society at large, even assuming a near-religious power over thought and action. As the call for this Special Issue recognizes, archaeological research is socially and historically situated and consequently influenced by these same broader developments. In archaeology, as in the wider world, data is the foundation for knowledge, but its capacity is rarely reflected upon. This paper offers just such a reflection: a meditation on the nature of archaeological digital data and the challenges for its (re)use. It asks what we understand by data: its etymology and comprehension, its exceptionality and mutability, its constructs and infrastructures, and its origins and consequences. The concept of the archaeological data imaginary is introduced to better understand approaches to the collection and use of archaeological data, and a case study examines how knowledge is mediated and remediated through the data embedded in grey literature. Appreciating the volatility and unpredictability of digital data is key in understanding its potential for use and reuse in the creation of archaeological knowledge.
]]>Digital doi: 10.3390/digital2020015
Authors: Ian Dawson Andrew Meirion Jones Louisa Minkin Paul Reilly
Digital images are produced by humans and autonomous devices everywhere and, increasingly, ‘everywhen’. Legacy image data, like Mary Shelley’s infamous monster, can be stitched together as either smooth and eloquent, or jagged and abominable, supplementary combinations from various times to create a thought-provoking and/or repulsive Frankensteinian assemblage composed, like most archaeological assemblages, of messy temporal components combining, as Gavin Lucas sums it up, as “a mixture of things from different times and with different life histories but which co-exist here and now”. In this paper, we take a subversive Virtual Art/Archaeology approach, adopting Jacques Derrida’s notion of the ‘supplement’, to explore the temporality of archaeological legacy images, introducing the concept of timesheds or temporal brackets within aggregated images. The focus of this temporally blurred, and time-glitched, study is the World Heritage Site of the Neolithic to Common Era henge monument of Avebury, UK (United Kingdom).
]]>Digital doi: 10.3390/digital2020014
Authors: Sotirios Batsakis Marios Adamou Ilias Tachmazidis Sarah Jones Sofya Titarenko Grigoris Antoniou Thanasis Kehagias
Adult referrals to specialist autism spectrum disorder (ASD) diagnostic services have increased in recent years, placing strain on existing services and illustrating the need for a reliable screening tool to identify and prioritize patients most likely to receive an ASD diagnosis. In this work, a detailed overview of existing approaches is presented, and a data-driven machine learning analysis is applied to a dataset of 192 adult autism cases. Our results show initial promise, achieving a total positive rate (i.e., the ratio of correctly classified instances to all instances) of up to 88.5%, but also point to limitations of the currently available data, opening up avenues for further research. The main direction of this research is the development of a novel autism screening tool for adults (ASTA), also introduced in this work; preliminary results indicate that the ASTA is suitable for use as a screening tool for adult populations in clinical settings.
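The total positive rate the abstract defines (correctly classified instances over all instances) is simple to compute; below is a minimal sketch with hypothetical screening outcomes, not data from the 192-case dataset:

```python
def total_positive_rate(predicted, actual):
    """Ratio of correctly classified instances to all instances."""
    if len(predicted) != len(actual):
        raise ValueError("prediction and label lists must match in length")
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical screening outcomes: 1 = likely ASD diagnosis, 0 = unlikely.
predicted = [1, 1, 0, 0, 1, 0, 1, 0]
actual    = [1, 0, 0, 0, 1, 0, 1, 1]
rate = total_positive_rate(predicted, actual)
print(f"total positive rate: {rate:.1%}")  # 6 of 8 correct -> 75.0%
```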
]]>Digital doi: 10.3390/digital2020013
Authors: Loretta Bortey David J. Edwards Chris Roberts Iain Rillie
This study conducts a systematic review of safety risk models and theories by summarizing and comparing them to identify the best strategies that can be adopted in a digital ‘conceptual’ safety risk model for highway workers’ safety. A mixed philosophical paradigm was adopted (using both interpretivism and post-positivism couched within inductive reasoning) for a systematic review and comparative analysis of existing risk models and theories. The underlying research question formulated was: can existing models and theories of safety risk be used to develop this proposed digital risk model? In total, 607 papers (where each constituted a unit of analysis and secondary data source) were retrieved from Scopus and analysed through colour coding, classification and scientometric analysis using VOSViewer and Microsoft Excel software. The reviewed models were built on earlier safety risk models with minor upgrades. However, human elements (human errors, human risky behaviour and untrained staff) remained a constant characteristic contributing to safety risk occurrences across current and projected trends of safety risk. Therefore, more proactive indicators such as risk perception, safety climate, and safety culture have been included in contemporary safety risk models and theories to address the human contribution to safety risk events. Highway construction safety risk literature is scant, and consequently, comprehensive risk prevention models have not been well examined in this area. Premised upon a rich synthesis of secondary data, a conceptual model was recommended, which proposes infusing machine learning predictive models (augmented with inherent resilient capabilities) to enable models to adapt and recover in the event of an inevitable predicted risk incident (referred to as the resilient predictive model).
This paper presents a novel resilient predictive safety risk conceptual model that employs machine learning algorithms to enhance the prevention of safety risk in the highway construction industry. Such a digital model contains adaptability and recovery mechanisms to adjust and bounce back when predicted safety risks are unavoidable. This will help prevent unfortunate events in time and control the impact of predicted safety risks that cannot be prevented.
]]>Digital doi: 10.3390/digital2020012
Authors: Elina Roinioti Eleana Pandia Markos Konstantakis Yannis Skarpelos
In this paper, we discuss the gamification strategies and methodologies used by TRIPMENTOR—a game-oriented cultural tourism application in the region of Attica. Its primary purpose is to provide visitors with rich media content via the web and mobile environments by redirecting travellers, highlighting points of interest, and providing information for tour operators. Gamification is a critical component of the project; it relates users to specific sites and activities, improves their visiting experiences, and encourages a constant interaction with the application through a playful experience. In TRIPMENTOR, gamification serves both as a tourism marketing strategy and as a tool for encouraging users to share their experiences while exploring Attica in a way designed to meet their personal needs, interests, and habits. This paper aims to describe and analyse the gamification mechanisms applied, following the Octalysis framework, and discuss the opportunities and challenges of gamification as a tourist marketing strategy.
]]>Digital doi: 10.3390/digital2020011
Authors: Khaled Takrouri Edward Causton Benjamin Simpson
Over the past decade, the use of AR has increased significantly across a wide range of applications. Although there are many good examples of AR technology being used in engineering, retail, and entertainment, the technology has not been widely adopted for teaching in university engineering departments. It is generally accepted that the use of AR can complement the students’ learning experience by improving engagement and by helping to visualise complex engineering physics; however, several key challenges still have to be addressed to fully integrate the use of AR into a broader engineering curriculum. This paper reviews the uses of AR in engineering education, highlights the benefits of AR integration in engineering curricula, and identifies the barriers that are preventing its wider adoption.
]]>Digital doi: 10.3390/digital2020010
Authors: Mohammed A. Akl Dina E. Mansour Fengyuan Zheng WookJin Seong
The accuracy with which virtual articulators are able to simulate centric and eccentric movements when fabricating definitive restorations has not yet been proven to be on par with mechanical articulators which have been reliably used in restorative dentistry for decades. This may be an issue when working on complex restorative cases utilizing a digital workflow and could result in considerable chairside adjustment time and subsequent loss of occlusal anatomy and morphology. Interchanging between digital and analog workflows is a challenge as accurate cross-mounting is difficult due to the changes that occur as the digital and analog workflows progress. This technique article provides a method for the fabrication of simple digital mounting jigs that enable clinicians and laboratory technicians to mount printed digital wax-ups and working casts back onto a programmed mechanical articulator, opposing diagnostic casts that have originally been mounted by means of a facebow transfer. This allows for the positioning of printed digital wax-ups and working casts to be in the correct 3-dimensional spatial relationship on the mechanical articulator for any necessary occlusal adjustments of the digitally designed wax-ups and/or definitive restorations before they are moved chairside.
]]>Digital doi: 10.3390/digital2020009
Authors: Georgios Gkougkoudis Dimitrios Pissanidis Konstantinos Demertzis
In the never-ending search by Law Enforcement Agencies (LEAs) for ways to reduce crime more effectively, the prevention of criminal activity is always considered the ideal solution. Since the 1990s, Intelligence-led Policing (ILP) has been implemented in some form by many LEAs around the world for crime prevention. Along with ILP, LEAs nowadays increasingly turn to various new surveillance technologies. As a result, there are numerous studies and reports introducing compelling results from LEAs that have implemented ILP, offering robust insights into what the future of policing could look like. In this context, this paper explores the most recent literature, identifying where ILP stands today in Greece and to what extent it could be a viable, practical approach to crime prevention. In addition, it examines the degree to which new technologies have been adopted by the European Union and the Hellenic Police in their “battle” against crime. It is concluded that most technologies are at the research stage, and studies are underway in many areas.
]]>Digital doi: 10.3390/digital2020008
Authors: Juan José Reyes Salgado
Cephalometric analysis is an excellent instrument in clinical diagnosis, treatment, and recovery from surgery. Efforts are ongoing to develop computerized cephalometric analysis systems for dental X-ray images for clinical and educational use. Much commercial software exists, but its high cost makes it unaffordable for some undergraduate students and low-income medical institutions, so open-source alternatives are the best option. This study aimed to design Cephalopoint, free software that applies vector algebra to perform accurate and precise cephalometric analysis. Three tests were used to validate the measurements: an accuracy test, which compared three selected cases, each measured 32 times using both the manual and the software technique; a time test, which recorded the average time needed to complete the manual and software techniques in the previous test; and a statistical test, which measured 42 random cases with the software technique and applied statistical analysis to the results. The results showed high repeatability and no significant difference between the manual tracing and software techniques. All variables calculated with the software technique exhibited a normal distribution. Cephalopoint is accurate and precise software for cephalometric measurements, and it significantly decreased measurement time compared with the manual technique.
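As a rough illustration of the vector-algebra approach the abstract describes, a cephalometric angle can be computed from digitized landmark coordinates via the dot product. The landmark names and coordinates below are hypothetical, not taken from Cephalopoint:

```python
import math

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` formed by rays toward p1 and p2,
    computed from the dot product of the two direction vectors."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Illustrative landmark coordinates (pixels) on a lateral cephalogram;
# points and values are hypothetical, not from the paper.
sella, nasion, a_point = (0.0, 0.0), (100.0, 0.0), (100.0, -50.0)
print(round(angle_at(nasion, sella, a_point), 1))  # angle at nasion: 90.0
```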
]]>Digital doi: 10.3390/digital2010007
Authors: Orouba Almilaji Vegard Engen Jonathon Snook Sharon Docherty
To facilitate the clinical use of an algorithm for predicting the risk of gastrointestinal malignancy in iron deficiency anaemia (the IDIOM score), a software application has been developed with a view to providing free and simple access to healthcare professionals in the UK. A detailed requirements analysis for intended users of the application revealed the need for an automated decision-support tool in which anonymised, individual patient data is entered and gastrointestinal cancer risk is calculated and displayed immediately, which lends itself to use in busy clinical settings. Human-centred design was employed to develop the solution, focusing on the users and their needs, whilst ensuring that they are provided with sufficient details to appropriately interpret the risk score. The IDIOM App has been developed using R Shiny as a web-based application enabling access from different platforms, with updates that can be carried out centrally through the host server. The application has been evaluated through literature search, internal/external validation, code testing, risk analysis, and usability assessments. Legal notices, a contact system with the research and maintenance teams, and all the supportive information for the application, such as descriptions of the target population and intended users, have been embedded within the application interface. With the purpose of providing a guide to developing standalone software medical devices in an academic setting, this paper presents the theoretical and practical aspects of developing, writing technical documentation for, and certifying standalone software medical devices, using the IDIOM App as an example.
]]>Digital doi: 10.3390/digital2010006
Authors: Nafiz Sadman Md Manjurul Ahsan Abdur Rahman Zahed Siddique Kishor Datta Gupta
Decentralized Finance (DeFi) is an emerging and revolutionizing field with notable uncertainties about its reliability for use on a mass scale. On the other hand, Artificial Intelligence (AI) has proved to be a crucial helping tool in numerous domains. In this study, we present a systematic review of the utility of AI in DeFi in terms of impact, reliability, and security and conduct an exhaustive analysis. The review was motivated by an in-depth investigation of recently published literature that prioritized AI and DeFi in its research. This research, like many prior studies, examined the articles in terms of impact, reliability, and security. In addition, a new relevance score is introduced to better comprehend the quality of the content. According to our investigation, the combination of AI and DeFi is one of the trending research topics, yet it lacks adequate interpretations of black-box methodologies. Furthermore, it was discovered that one of the primary issues in DeFi is security, and numerous technologies, including blockchain technology and machine learning approaches, have been used to minimize such challenges. We hope that the gap addressed throughout this review will give insights to future researchers and practitioners, ultimately leading to new research opportunities in AI to bridge the gap of trust between peers and make the integration of DeFi more agile in the near future.
]]>Digital doi: 10.3390/digital2010005
Authors: Olanrewaju Sanda Michalis Pavlidis Nikolaos Polatidis
Blockchain is now utilized by a diverse spectrum of applications and is proclaimed as a technological innovation that transforms the way that data are stored. This technology has the potential to transform the healthcare sector, especially the prevalent issues of patients’ data privacy and fragmented healthcare data. However, there is no evidence-based effort to develop a readiness assessment framework for blockchain that combines all the different social and economic factors and involves all stakeholders. Based on a systematic literature review, the proposed framework is applied to Portugal’s healthcare sector and its applicability is outlined. The findings in this paper show the unique importance of regulators and the government in achieving a globally acceptable regulatory framework for the adoption of blockchain technology in healthcare and other sectors. Business entities and solution providers are ready to leverage the opportunities of blockchain, but the absence of a widely acceptable regulatory framework that protects stakeholders’ interests is slowing down the adoption of blockchain. There are several misconceptions regarding blockchain laws and regulations, which have slowed stakeholder readiness. This paper will be useful as a guideline and knowledge base to reinforce blockchain adoption.
]]>Digital doi: 10.3390/digital2010004
Authors: Anudari Batsaikhan Wolfgang Kurtz Stephan Hachinger
In citizen science, citizens are encouraged to participate in research, with web technologies promoting location-independent participation and broad knowledge sharing. In this study, web technologies were extracted from 112 citizen science projects listed on the “Bürger schaffen Wissen” platform. Four indicators of web technology use (online platforms, educational tools, social media, and data sharing between projects) were chosen to quantify the extent to which web technologies are used within citizen science projects. The results show that the use of web technologies is already very well established in both natural and social science projects, and that only the possibilities for data sharing between projects are limited.
]]>Digital doi: 10.3390/digital2010003
Authors: Sandesh Pantha Sumina Shrestha Janette Collier
Internet usage may help promote the physical and mental health of older adults living in Residential Aged Care Facilities (RACF). There is little evidence of how these older citizens use internet services. This systematic review aims to explore the trends and factors contributing to internet use among aged care residents. A systematic search will be conducted on nine online databases—MEDLINE, EMBASE, PsycInfo, CINAHL, AgeLine, ProQuest, Web of Science, Scopus, and the Cochrane Library. Two reviewers will independently conduct title and abstract screening, full-text reading, critical appraisal, and data extraction. Any discrepancies will be resolved by consensus. Methodological risk of bias will be assessed using the Effective Public Health Practice Project measure and Joanna Briggs Institute checklist. We will report a narrative synthesis of the evidence. Information on factors contributing to internet use and their strength of association will be reported. If feasible, we will undertake a meta-analysis and meta-synthesis. Our review will provide information on the factors predicting internet use among older adults in residential aged care facilities. The evidence from this review will help to formulate further research objectives and, potentially, to design an intervention to trial internet access for these groups. (Protocol Registration: PROSPERO-CRD 42020161227).
]]>Digital doi: 10.3390/digital2010002
Authors: Zhe Gong Ruizhi Wang Guobin Xia
This research explores the efficiency of augmented reality (AR) technology as a tool for triggering positive museum experiences. This article describes the design, development, and evaluation of an AR prototype for information visualization based on a famous Chinese art piece named Along the River During the Qingming Festival. In total, 58 participants were invited to evaluate the prototype. Results suggest that AR technology can foster users’ engagement, learning, meaningful experiences, and emotional connection, and hence stimulate their interest and support their learning process. These findings may encourage researchers and designers to develop a broader range of innovative AR applications to promote digital learning experiences.
]]>Digital doi: 10.3390/digital2010001
Authors: Dejan Grba
From a small community of pioneering artists who experimented with artificial intelligence (AI) in the 1970s, AI art has expanded, gained visibility, and attained socio-cultural relevance since the second half of the 2010s. Its topics, methodologies, presentational formats, and implications are closely related to a range of disciplines engaged in the research and application of AI. In this paper, I present a comprehensive framework for the critical exploration of AI art. It comprises the context of AI art, its prominent poetic features, major issues, and possible directions. I address the poetic, expressive, and ethical layers of AI art practices within the context of contemporary art, AI research, and related disciplines. I focus on the works that exemplify poetic complexity and manifest the epistemic or political ambiguities indicative of a broader milieu of contemporary culture, AI science/technology, economy, and society. By comparing, acknowledging, and contextualizing both their accomplishments and shortcomings, I outline the prospective strategies to advance the field. The aim of this framework is to expand the existing critical discourse of AI art with new perspectives which can be used to examine the creative attributes of emerging practices and to assess their cultural significance and socio-political impact. It contributes to rethinking and redefining the art/science/technology critique in the age when the arts, together with science and technology, are becoming increasingly responsible for changing ecologies, shaping cultural values, and political normalization.
]]>Digital doi: 10.3390/digital1040016
Authors: Georgios D. Styliaras
The paper presents the current state of using augmented reality (AR) in the sectors of food analysis and food promotion through products and orders. Based on an extensive literature review, 34 indicative augmented reality applications of various purposes, target audiences, and implementations have been selected and presented. The applications are research-based, commercial, or purely entertainment-oriented. Eight classification criteria, defined specifically for these applications, are used to present them, covering content, context, execution scenario, markers, supported devices, implementation details, and appeal as measured by evaluations, downloads, or sales. Additionally, 16 implementation and supportive platforms used in the presented applications are described. The paper discusses the advantages and limitations of current applications, leading to proposals for further use of augmented reality in these food sectors towards a uniform handling of all parameters related to food processing, from production to consumption. These parameters include content use, design considerations, implementation issues, use of AR markers, etc.
]]>Digital doi: 10.3390/digital1040015
Authors: Dhiren A. Audich Rozita Dara Blair Nonnecke
Privacy policies play an important part in informing users about their privacy concerns by operating as memorandums of understanding (MOUs) between them and online services providers. Research suggests that these policies are infrequently read because they are often lengthy, written in jargon, and incomplete, making them difficult for most users to understand. Users are more likely to read short excerpts of privacy policies if they pertain directly to their concern. In this paper, a novel approach and a proof-of-concept tool are proposed that reduces the amount of privacy policy text a user has to read. It does so using a domain ontology and natural language processing (NLP) to identify key areas of the policies that users should read to address their concerns and take appropriate action. Using the ontology to locate key parts of privacy policies, average reading times were substantially reduced from 29–32 min to 45 s.
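The ontology-driven selection of policy excerpts might be sketched as follows; the concern-to-term mapping, function names, and policy text are illustrative stand-ins for the paper’s domain ontology and NLP pipeline, not its actual implementation:

```python
# Minimal sketch of concern-driven excerpt selection: a tiny "ontology"
# mapping user concerns to indicator terms, used to surface only the
# policy sentences worth reading. Terms and policy text are illustrative.
CONCERN_TERMS = {
    "data sharing": {"third party", "third parties", "share", "disclose"},
    "retention": {"retain", "retention", "store", "delete"},
}

def relevant_sentences(policy_text, concern):
    terms = CONCERN_TERMS[concern]
    sentences = [s.strip() for s in policy_text.split(".") if s.strip()]
    return [s for s in sentences
            if any(t in s.lower() for t in terms)]

policy = ("We collect your email address. We may share data with third "
          "parties for analytics. Records are deleted after 90 days.")
print(relevant_sentences(policy, "data sharing"))
```

A user concerned about data sharing would thus read one sentence instead of the full policy, which is the reading-time reduction the paper reports in spirit.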
]]>Digital doi: 10.3390/digital1040014
Authors: Aristotelis Ballas Panagiotis Katrakazas
Since its inception by Jewett and Williston in the late 1960s, the auditory brainstem response (ABR) has been an indispensable diagnostic tool used by audiologists around the world. Click-evoked ABR testing proves to be a reliable tool, as it provides an objective representation of auditory function, an estimate of hearing thresholds, and the ability to pinpoint a potential issue in the auditory neural pathway. The present study describes state-of-the-art ABR analytics platforms and provides an overview of their functionality. In conjunction, we introduce the design and development of a new, user-friendly web application built in the R language. This application provides several well-known, as well as new, key characteristics for the analysis of ABR waveforms, including absolute peak latencies, amplitudes, and interpeak latencies.
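The application itself is written in R, but the listed metrics are straightforward to sketch (here in Python): assuming wave peaks I, III, and V have already been identified, interpeak latencies are simple differences of absolute latencies. The peak values below are illustrative, not data from the study:

```python
def interpeak_latencies(peaks):
    """peaks: {wave_label: (latency_ms, amplitude_uV)} for waves I, III, V.
    Returns the standard interpeak latency differences in milliseconds."""
    lat = {w: p[0] for w, p in peaks.items()}
    return {
        "I-III": round(lat["III"] - lat["I"], 2),
        "III-V": round(lat["V"] - lat["III"], 2),
        "I-V":   round(lat["V"] - lat["I"], 2),
    }

# Illustrative (latency_ms, amplitude_uV) pairs for the three main waves.
peaks = {"I": (1.6, 0.25), "III": (3.7, 0.30), "V": (5.6, 0.45)}
print(interpeak_latencies(peaks))  # {'I-III': 2.1, 'III-V': 1.9, 'I-V': 4.0}
```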
]]>Digital doi: 10.3390/digital1040013
Authors: Panagiotis Radoglou Grammatikis Panagiotis Sarigiannidis Christos Dalamagkas Yannis Spyridis Thomas Lagkas Georgios Efstathopoulos Achilleas Sesis Ignacio Labrador Pavon Ruben Trapero Burgos Rodrigo Diaz Antonios Sarigiannidis Dimitris Papamartzivanos Sofia Anna Menesidou Giannis Ledakis Achilleas Pasias Thanasis Kotsiopoulos Anastasios Drosou Orestis Mavropoulos Alba Colet Subirachs Pol Paradell Sola José Luis Domínguez-García Marisa Escalante Molinuevo Martin Alberto Benito Caracuel Francisco Ramos Vasileios Gkioulos Sokratis Katsikas Hans Christian Bolstad Dan-Eric Archer Nikola Paunovic Ramon Gallart Theodoros Rokkas Alicia Arce
The technological leap of smart technologies and the Internet of Things has advanced the conventional model of the electrical power and energy systems into a new digital era, widely known as the Smart Grid. The advent of Smart Grids provides multiple benefits, such as self-monitoring, self-healing and pervasive control. However, it also raises crucial cybersecurity and privacy concerns that can lead to devastating consequences, including cascading effects with other critical infrastructures or even fatal accidents. This paper introduces a novel architecture, which will increase the Smart Grid resiliency, taking full advantage of the Software-Defined Networking (SDN) technology. The proposed architecture called SDN-microSENSE architecture consists of three main tiers: (a) Risk assessment, (b) intrusion detection and correlation and (c) self-healing. The first tier is responsible for evaluating dynamically the risk level of each Smart Grid asset. The second tier undertakes to detect and correlate security events and, finally, the last tier mitigates the potential threats, ensuring in parallel the normal operation of the Smart Grid. It is noteworthy that all tiers of the SDN-microSENSE architecture interact with the SDN controller either for detecting or mitigating intrusions.
]]>Digital doi: 10.3390/digital1030012
Authors: Nafissa Yusupova Diana Bogdanova Nadejda Komendantova Hossein Hassani
The topic of affective computing has been growing rapidly in recent times. In the last five years, the volume of publications in this field has tripled. The question arises as to which research trends are most in demand today. This can only be judged by analysing the publications that present the results of research. Since researchers have access to the entire global scientific publication space, the task of analysing big data arises. This leads to the problem of identifying the most significant results in the subject area of interest. This paper presents some results of the analysis of semi-structured information from scientific citation databases on the subject of “affective computing”.
]]>Digital doi: 10.3390/digital1030011
Authors: Kowshik Bhowmik Anca Ralescu
This article presents a systematic literature review on quantifying the proximity between independently trained monolingual word embedding spaces. A search was carried out in the broader context of inducing bilingual lexicons from cross-lingual word embeddings, especially for low-resource languages. The returned articles were then classified. Cross-lingual word embeddings have drawn the attention of researchers in the field of natural language processing (NLP). Although existing methods have yielded satisfactory results for resource-rich languages and languages related to them, some researchers have pointed out that the same is not true for low-resource and distant languages. In this paper, we report the research on methods proposed to provide better representation for low-resource and distant languages in the cross-lingual word embedding space.
]]>Digital doi: 10.3390/digital1030010
Authors: Vassilios Krassanakis
Gaze data visualization constitutes one of the most critical processes during eye-tracking analysis. Considering that modern devices are able to collect gaze data in extremely high frequencies, the visualization of the collected aggregated gaze data is quite challenging. In the present study, contiguous irregular cartograms are used as a method to visualize eye-tracking data captured by several observers during the observation of a visual stimulus. The followed approach utilizes a statistical grayscale heatmap as the main input and, hence, it is independent of the total number of the recorded raw gaze data. Indicative examples, based on different parameters/conditions and heatmap grid sizes, are provided in order to highlight their influence on the final image of the produced visualization. Moreover, two analysis metrics, referred to as center displacement (CD) and area change (AC), are proposed and implemented in order to quantify the geometric changes (in both position and area) that accompany the topological transformation of the initial heatmap grids, as well as to deliver specific guidelines for the execution of the used algorithm. The provided visualizations are generated using open-source software in a geographic information system.
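The two proposed metrics admit a minimal formulation: CD as the Euclidean distance between a grid cell’s centroid before and after the cartogram transformation, and AC as the relative change in the cell’s area. This is a sketch of one plausible formulation, not the paper’s exact definitions, and the cell geometry is illustrative:

```python
import math

def center_displacement(c_before, c_after):
    """CD: Euclidean distance between a cell's original and transformed centroid."""
    return math.hypot(c_after[0] - c_before[0], c_after[1] - c_before[1])

def area_change(a_before, a_after):
    """AC: relative area change of a cell after the cartogram transformation."""
    return (a_after - a_before) / a_before

c0, c1 = (2.0, 2.0), (5.0, 6.0)     # centroid before/after (illustrative)
print(center_displacement(c0, c1))  # 5.0
print(area_change(4.0, 6.0))        # 0.5 (cell grew by 50%)
```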
]]>Digital doi: 10.3390/digital1020009
Authors: Tiril Sundby Julia Maria Graham Adil Rasheed Mandar Tabib Omer San
Digital twins are meant to bridge the gap between real-world physical systems and virtual representations. Both stand-alone and descriptive digital twins incorporate 3D geometric models, which are the physical representations of objects in the digital replica. Digital twin applications are required to rapidly update internal parameters with the evolution of their physical counterpart. Due to an essential need for having high-quality geometric models for accurate physical representations, the storage and bandwidth requirements for storing 3D model information can quickly exceed the available storage and bandwidth capacity. In this work, we demonstrate a novel approach to geometric change detection in a digital twin context. We address the issue through a combined solution of dynamic mode decomposition (DMD) for motion detection, YOLOv5 for object detection, and 3D machine learning for pose estimation. DMD is applied for background subtraction, enabling detection of moving foreground objects in real-time. The video frames containing detected motion are extracted and used as input to the change detection network. The object detection algorithm YOLOv5 is applied to extract the bounding boxes of detected objects in the video frames. Furthermore, we estimate the rotational pose of each object in a 3D pose estimation network. A series of convolutional neural networks (CNNs) conducts feature extraction from images and 3D model shapes. Then, the network outputs the estimated Euler angles of the camera orientation with respect to the object in the input image. By only storing data associated with a detected change in pose, we minimize necessary storage and bandwidth requirements while still recreating the 3D scene on demand. Our assessment of the new geometric detection framework shows that the proposed methodology could represent a viable tool in emerging digital twin applications.
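The storage-minimizing idea (keeping pose data only when a change is detected) can be sketched as a simple thresholding filter over estimated Euler angles. The threshold, pose stream, and function names below are illustrative, not from the paper, and angle wraparound is ignored for brevity:

```python
# Keep a pose record only when the estimated Euler angles differ from the
# last stored pose by more than a threshold, minimizing stored data while
# still allowing the 3D scene to be recreated on demand.
def changed(pose, last, threshold_deg=5.0):
    return any(abs(a - b) > threshold_deg for a, b in zip(pose, last))

def filter_poses(stream, threshold_deg=5.0):
    stored = []
    for pose in stream:                      # pose = (roll, pitch, yaw) in degrees
        if not stored or changed(pose, stored[-1], threshold_deg):
            stored.append(pose)
    return stored

stream = [(0, 0, 0), (1, 0, 2), (0, 10, 0), (0, 11, 1), (40, 10, 0)]
print(filter_poses(stream))  # [(0, 0, 0), (0, 10, 0), (40, 10, 0)]
```

Only three of the five poses are stored; intermediate frames whose pose stays within the threshold of the last stored record are dropped.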
Digital doi: 10.3390/digital1020008
Authors: Andreas Triantafyllidis Anastasios Alexiadis Konstantinos Soutos Thomas Fischer Konstantinos Votis Dimitrios Tzovaras
Physical inactivity in children is a major public health challenge, for which valid physical activity assessment tools are needed. Wearable devices provide a means for objective assessment of children’s physical activity, but they are often not adopted because of issues such as cost, comfort, and privacy. In this context, self-reporting tools could be employed, but their validity in relation to a child’s age is understudied. We present the agreement of one of the most popular self-reporting tools, the Physical Activity Questionnaire for Children (PAQ-C), with accelerometer-measured physical activity in 9-year-old versus 12-year-old children who wore an accelerometer-based device for seven consecutive days. We study the relationship between the PAQ-C and accelerometer scores using Spearman’s rank correlation coefficients and Bland–Altman plots in a sample of 131 children included for analysis. Overall, there was a significant correlation between the PAQ-C score and physical activity measures for the 12-year-old children (rho = 0.47 for total physical activity, rho = 0.43 for moderate-to-vigorous physical activity, rho = 0.41 for steps, p < 0.01), but not for the 9-year-old children (rho = 0.08 for total physical activity, rho = 0.21 for moderate-to-vigorous physical activity, rho = 0.19 for steps, p > 0.05). For the 9-year-old children, no PAQ-C item other than item 3 (activity at recess) reached a significant correlation with accelerometry (p > 0.05). Therefore, for a more objective assessment of physical activity in younger children, wearable devices should be preferred.
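Spearman's rank correlation, used above to relate PAQ-C scores to accelerometer measures, is simply the Pearson correlation of the ranks. A minimal NumPy sketch, assuming no tied values so that simple ordinal ranks suffice (real questionnaire data with ties needs average ranks, as provided by `scipy.stats.spearmanr`):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation, assuming no ties in x or y."""
    def ranks(a):
        # Position of each value in sorted order (0-based ordinal ranks)
        r = np.empty(len(a))
        r[np.argsort(a)] = np.arange(len(a))
        return r
    # Pearson correlation of the two rank vectors
    return np.corrcoef(ranks(np.asarray(x)), ranks(np.asarray(y)))[0, 1]
```

For example, `spearman_rho([1, 2, 3, 4, 5], [10, 20, 30, 25, 40])` gives 0.9: one swapped pair costs 6 * (1 + 1) / (5 * 24) = 0.1 off a perfect monotone association.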
Digital doi: 10.3390/digital1020007
Authors: Akshai Ramesh Venkatesh Balavadhani Parthasarathy Rejwanul Haque Andy Way
Phrase-based statistical machine translation (PB-SMT) was the dominant paradigm in machine translation (MT) research for more than two decades. Deep neural MT models have been producing state-of-the-art performance across many translation tasks for four to five years. In other words, neural MT (NMT) supplanted PB-SMT a few years back and currently represents the state of the art in MT research. Translation to or from under-resourced languages has historically been seen as a challenging task. Despite producing state-of-the-art results in many translation tasks, NMT still poses problems, such as performing poorly for many low-resource language pairs, mainly because of the data-demanding nature of its learning task. MT researchers have been trying to address this problem via various techniques, e.g., exploiting source- and/or target-side monolingual data for training, augmenting bilingual training data, and transfer learning. Despite some success, none of the present-day benchmarks have entirely overcome the problem of translation in low-resource scenarios for many languages. In this work, we investigate the performance of PB-SMT and NMT on two rarely tested under-resourced language pairs, English-to-Tamil and Hindi-to-Tamil, taking a specialised data domain into consideration. This paper demonstrates our findings and presents results showing the rankings of our MT systems produced via a social media-based human evaluation scheme.
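One common way to exploit target-side monolingual data, mentioned among the techniques above (though not necessarily the approach taken in this paper), is back-translation: a target-to-source model produces synthetic source sentences that are paired with real monolingual target sentences. A schematic sketch, where `translate_t2s` is a placeholder for any reverse-direction translation model:

```python
def back_translate_augment(mono_target, translate_t2s):
    """Build synthetic parallel data from target-side monolingual text.

    mono_target: iterable of sentences in the target language.
    translate_t2s: any target-to-source translation function (placeholder).
    Returns (synthetic_source, real_target) pairs for training data.
    """
    return [(translate_t2s(sentence), sentence) for sentence in mono_target]
```

The synthetic pairs are then concatenated with the genuine bilingual corpus before training the source-to-target NMT model, which is how low-resource pairs typically gain extra supervision.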
Digital doi: 10.3390/digital1010006
Authors: Natália Resende Andy Way
In this article, we address the question of whether exposure to the translated output of MT systems could result in changes in the cognitive processing of English as a second language (L2 English). To answer this question, we first conducted a survey with 90 Brazilian Portuguese L2 English speakers with the aim of understanding how and for what purposes they use web-based MT systems. To investigate whether MT systems are capable of influencing L2 English cognitive processing, we carried out a syntactic priming experiment with 32 Brazilian Portuguese speakers. We wanted to test whether speakers re-use in their subsequent English speech the same syntactic alternative previously seen in the MT output, when using the popular Google Translate (GT) system to translate sentences from Portuguese into English. The results of the survey show that Brazilian Portuguese L2 English speakers use Google Translate as a tool supporting their speech in English as well as a source of English vocabulary learning. The results of the syntactic priming experiment show that exposure to an English syntactic alternative through GT can lead to the re-use of the same syntactic alternative in subsequent speech even if it is not the speaker’s preferred syntactic alternative in English. These findings suggest that GT is being used as a tool for language learning purposes and so is indeed capable of rewiring the processing of L2 English syntax.
Digital doi: 10.3390/digital1010005
Authors: Yannis Manolopoulos
Many decades back, Computer Science emerged as a new scientific discipline at the crossroads of mathematics, physics and engineering [...]
Digital doi: 10.3390/digital1010004
Authors: Mikael Sjödahl Erik Olsson
The traceability of manufactured components is growing in importance with the greater use of digital service solutions offered and with an increased digitalization of manufacturing logistics. In this paper, we investigate the use of image-plane laser speckles as a tool to acquire a unique code from the surface of the component and the ability to use this pattern as a secure component-specific digital fingerprint. Intensity correlation is used as a numerical identifier. Metal sheets of different materials and steel pipes are considered. It is found that laser speckles are robust against surface alterations caused by surface compression and scratching and that the correct pattern reappears from a surface contaminated by oil after cleaning. In this investigation, the detectability is close to 100% for all surfaces considered, with zero false positives. The exception is a heavily oxidized surface wiped by a cotton cloth between recordings. It is further found that the main source of lost detectability is misalignment between the registration and detection geometries, where a positive match is lost by a change in angle on the order of 60 mrad. Therefore, as long as the registration and detection systems, respectively, use the same optical arrangement, laser speckles have the ability to serve as unique component identifiers without having to add extra markings or a dedicated sensor to the component.
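Intensity correlation as a numerical identifier, as used above, can be illustrated as a zero-mean normalized correlation between a registered speckle image and a newly acquired probe image, declaring a match when the coefficient exceeds a threshold. A minimal sketch; the 0.5 threshold is illustrative and not the paper's operating point:

```python
import numpy as np

def intensity_correlation(a, b):
    """Zero-mean normalized correlation between two speckle intensity images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def is_same_component(registered, probe, threshold=0.5):
    """Match decision: does the probe image reproduce the registered pattern?"""
    return intensity_correlation(registered, probe) > threshold
```

Because the measure is normalized and zero-mean, it is insensitive to uniform changes in illumination level or detector gain, which is one reason correlation coefficients are a natural choice for speckle fingerprints.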
Digital doi: 10.3390/digital1010003
Authors: Mattia Tambaro Marta Bisio Marta Maschietto Alessandro Leparulo Stefano Vassanelli
Numerous experiments are only feasible with low latencies in the detection and processing of neural brain activity, on the order of a few milliseconds from action to reaction. In this paper, a design for sub-millisecond detection and communication of the spiking activity detected by an array of 32 intracortical microelectrodes is presented, exploiting the real-time processing provided by a Field Programmable Gate Array (FPGA). The design is embedded in the commercially available RHS stimulation/recording controller from Intan Technologies, which allows recording intracortical signals and performing IntraCortical MicroStimulation (ICMS). The Spike Detector (SD) is based on the Smoothed Nonlinear Energy Operator (SNEO) and includes a novel approach to estimate an RMS-based, firing-rate-independent threshold that can be tuned to finely detect both single Action Potentials (APs) and Multi-Unit Activity (MUA). A low-latency SD, together with the ICMS capability, creates a powerful tool for Brain-Computer Interface (BCI) closed-loop experiments relying on neuronal activity-dependent stimulation. The design also includes a third-order Butterworth high-pass IIR filter and Savitzky-Golay polynomial fitting; a privileged fast USB connection to stream the detected spikes to a host computer; and a sub-millisecond-latency Universal Asynchronous Receiver-Transmitter (UART) protocol to send detections and receive ICMS triggers. The source code and instructions for the project can be found on GitHub.
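The SNEO detector described above scores each sample by the smoothed nonlinear energy psi[n] = x[n]^2 - x[n-k] x[n+k] and compares it with an RMS-derived threshold. A simplified offline sketch in NumPy; the multiplier C, the lag k, and the Bartlett window length are assumptions for illustration, and the FPGA design additionally handles filtering, streaming, and the firing-rate-independent threshold estimation:

```python
import numpy as np

def sneo_detect(x, k=1, win=7, C=5.0):
    """Smoothed Nonlinear Energy Operator spike detection (offline sketch).

    x: 1-D filtered extracellular signal.
    Returns indices where the smoothed NEO exceeds C times its RMS.
    """
    psi = np.zeros_like(x)
    # k-NEO: emphasizes short, high-energy transients such as spikes
    psi[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
    w = np.bartlett(win)
    s = np.convolve(psi, w / w.sum(), mode="same")  # triangular smoothing
    thr = C * np.sqrt(np.mean(s ** 2))              # RMS-based threshold
    return np.flatnonzero(s > thr)
```

On a noisy trace with injected biphasic transients, the operator produces sharp peaks at the spike times while remaining near the noise floor elsewhere, which is what makes a simple scalar threshold workable in hardware.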
Digital doi: 10.3390/digital1010002
Authors: Diana Pérez-Marín
Pedagogic Conversational Agents (PCAs) can be defined as autonomous characters that cohabit learning environments with students to create rich learning interactions. Currently, there are many agents reported in the literature of this fast-evolving field. In this paper, several designs of PCAs used as instructors, students, or companions are reviewed using a taxonomy to analyze the possibilities that PCAs can bring into the classrooms. Finally, a discussion as to whether this technology could become the future of education depending on the design trends identified is open for any educational technology practitioner, researcher, teacher, or manager involved in 21st century education.
Digital doi: 10.3390/digital1010001
Authors: Temidayo Otunniyi Hermanus Myburgh
With ever-increasing wireless network demands, low-complexity reconfigurable filter design is expected to continue to require research attention. Extracting and reconfiguring channels of choice from multi-standard receivers using a generalized discrete Fourier transform filter bank (GDFT-FB) is computationally intensive. In this work, a lower-complexity algorithm is developed for this transform. The design employs two different approaches: hybridization of the generalized discrete Fourier transform filter bank with frequency response masking and coefficient decimation method 1; and the improvement and implementation of the resulting hybrid generalized discrete Fourier transform (HGDFT) using a parallel distributed arithmetic-based residual number system (PDA-RNS) filter. The design is evaluated using MATLAB 2020a. Synthesis of area, resource utilization, delay, and power consumption was done on a Quartus 11 Altera 90 using the very high-speed integrated circuits (VHSIC) hardware description language. During MATLAB simulations, the proposed HGDFT algorithm attained a 66% reduction in the number of multipliers compared with existing algorithms. From co-simulation on the Quartus 11 Altera 90, optimization of the filter with PDA-RNS resulted in a 77% reduction in the number of occupied lookup table (LUT) slices, an 83% reduction in power consumption, and an 11% reduction in execution time, compared with existing methods.
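A GDFT filter bank extracts M uniformly spaced channels by modulating a single prototype lowpass filter. The direct (non-polyphase) form below illustrates the idea for the plain DFT case: each channel is downconverted, lowpass filtered, and decimated. The GDFT generalizes this with frequency and phase offsets, and the polyphase and frequency-response-masking refinements discussed in the paper reduce its cost; the function names and the windowed-sinc prototype here are illustrative, not the paper's design.

```python
import numpy as np

def prototype_lowpass(M, taps=65):
    """Windowed-sinc lowpass with cutoff pi/M and unit DC gain."""
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / M) * np.hamming(taps)
    return h / h.sum()

def dft_filter_bank(x, h, M):
    """Direct-form M-channel DFT filter bank (analysis side).

    Each channel k is shifted to baseband, filtered by the prototype
    lowpass h, and decimated by M. A polyphase implementation computes
    the same outputs at roughly 1/M of this cost.
    """
    n = np.arange(len(x))
    out = []
    for k in range(M):
        baseband = x * np.exp(-2j * np.pi * k * n / M)  # downconvert channel k
        out.append(np.convolve(baseband, h)[::M])       # filter and decimate
    return np.array(out)
```

A tone centred on channel k appears at full amplitude in that channel's output and is suppressed to the prototype's stopband level everywhere else, which is exactly the channel-extraction behaviour the receiver needs.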