Authors: Peter C. Terry, Renée L. Parsons-Smith, Symeon P. Vlachopoulos, Andrew M. Lane
Mood profile clusters have previously been identified in several cultural contexts. In the present study, six mood profile clusters, referred to as the iceberg, inverse Everest, inverse iceberg, shark fin, submerged, and surface profiles, were investigated in a Greek population. The names of the mood profiles reflect how they appear after raw scores for Tension, Depression, Anger, Vigor, Fatigue, and Confusion (in that order) are converted to T-scores and depicted graphically. A Greek translation of the Brunel Mood Scale (BRUMS-Greek) was completed by 1786 adults, comprising 1417 exercise participants and 369 physically inactive adults (male = 578, female = 1208) aged 18–64 years (M = 34.73 ± 11.81 years). Although females outnumbered males, sample sizes of over 500 in each group suggest some degree of representativeness. Seeded k-means cluster analysis clearly identified the six hypothesized mood profiles. Men were over-represented for the iceberg profile. For age, the 18–25 years group was under-represented for the iceberg profile, whereas the 46–55 and 56+ years groups were over-represented. The 56+ years group was under-represented for the inverse Everest profile, and the 18–25 years group was over-represented for the shark fin profile. For body mass index (BMI), participants in the obese weight category were over-represented for the inverse iceberg and shark fin profiles and under-represented for the submerged profile. Active participants were over-represented for the iceberg and submerged profiles, and under-represented for the inverse Everest, inverse iceberg, and surface profiles. Findings supported the cross-cultural equivalence of the mood profile clusters and confirmed the link between physical inactivity, obesity, and negative mood profiles.
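The two computational steps described above — converting raw subscale scores to T-scores and running k-means seeded at hypothesized profile centroids — can be sketched as follows. This is an illustrative toy implementation, not the study's analysis pipeline (which used the six BRUMS subscales and six seed profiles); the data points and seeds below are invented for demonstration.

```python
from statistics import mean, stdev

def t_scores(raw):
    """Convert one subscale's raw scores to T-scores (mean 50, SD 10).
    Uses the sample SD here; a normative SD could be substituted."""
    m, s = mean(raw), stdev(raw)
    return [50 + 10 * (x - m) / s for x in raw]

def seeded_kmeans(points, seeds, iters=20):
    """k-means initialized at hypothesized profile centroids (seeds)."""
    cents = [list(c) for c in seeds]
    for _ in range(iters):
        # assign each point to the nearest centroid (squared Euclidean distance)
        labels = [min(range(len(cents)),
                      key=lambda k: sum((p - c) ** 2 for p, c in zip(pt, cents[k])))
                  for pt in points]
        # recompute centroids from current members
        for k in range(len(cents)):
            members = [pt for pt, lab in zip(points, labels) if lab == k]
            if members:
                cents[k] = [mean(dim) for dim in zip(*members)]
    return labels, cents

# Illustrative 2-D data: two hypothesized clusters seeded at (0, 0) and (10, 10)
pts = [(0.5, 0.2), (0.1, -0.3), (9.8, 10.1), (10.3, 9.7)]
labels, _ = seeded_kmeans(pts, seeds=[(0, 0), (10, 10)])
print(labels)  # [0, 0, 1, 1]
```

Seeding fixes the correspondence between clusters and the named profiles, which is what lets the analysis test whether the six hypothesized shapes emerge.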
Authors: Demetris Koutsoyiannis
Recent studies have provided evidence, based on analyses of instrumental measurements of the last seven decades, for a unidirectional, potentially causal link between temperature as the cause and carbon dioxide concentration ([CO2]) as the effect. In the most recent study, this finding was supported by analysing the carbon cycle and showing that the natural [CO2] changes due to temperature rise are far larger (by a factor > 3) than human emissions, while the latter are no larger than 4% of the total. Here, we provide additional support for these findings by examining the signatures of the stable carbon isotopes 12C and 13C. Examining isotopic data at four important observation sites, we show that the standard metric δ13C is consistent with an input isotopic signature that is stable over the entire period of observations (>40 years), i.e., not affected by increases in human CO2 emissions. In addition, proxy data covering the period after 1500 AD also show stable behaviour. These findings confirm the major role of the biosphere in the carbon cycle and a non-discernible signature of humans.
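The δ13C metric referenced above is the standard per-mil deviation of a sample's 13C/12C ratio from the VPDB reference standard. A minimal sketch (the VPDB ratio used here is an approximate, commonly quoted value, not one taken from the study):

```python
# Approximate 13C/12C ratio of the VPDB (Vienna Pee Dee Belemnite) standard
R_VPDB = 0.011180

def delta13C(r_sample):
    """delta13C (per mil) = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / R_VPDB - 1) * 1000.0

# A sample whose 13C/12C ratio equals the standard has delta13C = 0
print(round(delta13C(0.011180), 6))  # 0.0
# Atmospheric CO2 is depleted in 13C, with delta13C around -8 per mil
print(round(delta13C(0.011180 * (1 - 8 / 1000)), 3))  # -8.0
```

Because δ13C is a ratio of ratios, a stable input signature over time shows up directly as a stable δ13C trend regardless of the absolute [CO2] level.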
Authors: Yong-Bi Fu, Gregory W. Peterson, Eviatar Nevo, Ana Badea
Many studies have investigated the threat of climate change on wild plants, but few have investigated the genetic responses of crop wild relative populations under threat. We characterized the genetic responses of 10 wild barley (Hordeum spontaneum K. Koch) populations in Israel, sampling them in 1980 and again in 2008, through exome capture and RNA-Seq analyses. Sequencing 48 wild barley samples of these populations representing two collection years generated six million SNPs, and SNP annotations identified 12,926 and 13,361 deleterious SNPs for 1980 and 2008 samples, respectively. The assayed wild barley samples displayed intensified selective sweeps and elevated deleterious mutations across seven chromosomes in response to 28 years of global warming. On average, the 2008 samples had lower individual and population mutational burdens, but the population adaptation potential was estimated to be lower in samples from 2008 than in 1980. These findings highlight the genetic risks of losing wild barley under global warming and support the need to conserve crop wild relatives.
Authors: Tsz Yan Joyce Chan, Kevin C. Honeychurch
The electrochemical oxidation of levamisole at a glassy carbon electrode was investigated over the pH range 2.0–10.0. Cyclic voltammetric investigations showed that a single oxidation process was recorded, with a peak potential (Ep) shown to be pH-dependent in the range 5.0–8.0; between pH 2.0 and pH 5.0, and above pH 8.0, the Ep was found to be independent of pH, indicating apparent pKa values of 5.0 and 8.0. Peak currents were found to increase with increasing pH values. This voltammetric oxidation process was found to be consistent with a two-electron, two-proton oxidation to the corresponding sulfoxide. Based on these findings, the development of a method based on the high-performance liquid chromatography separation of levamisole, with electrochemical detection being used for its determination, was explored. The chromatographic conditions required for the separation of levamisole were first investigated and optimized using UV detection. The conditions were identified as a 150 mm × 4.6 mm, 5 µm C18 column with a mobile phase consisting of 50% methanol and 50%, 50 mM, pH 8.0 phosphate buffer. The technique of hydrodynamic voltammetry was applied to optimize the applied potential required for the determination of levamisole, identified as +2.3 V versus a stainless-steel pseudo-reference counter-electrode. Under the optimized conditions, levamisole exhibited a linear response over the range 1.00–20 mg/L (R2 = 0.999), with a detection limit of 0.27 mg/L. Determination of levamisole in artificial urine was shown to be possible via simple dilution in the mobile phase. Mean recoveries of 99.7% and 94.6%, with associated coefficients of variation of 8.2% and 10.2%, respectively, were obtained for 1.25 µg/mL (n = 5) and 2.50 µg/mL (n = 5).
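A linear calibration with R² and a 3σ-based detection limit of the kind reported above can be sketched as follows. The calibration points and blank standard deviation below are illustrative numbers chosen for demonstration, not the study's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares line fit; returns slope, intercept, R^2."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    ybar = sy / n
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Illustrative calibration: concentration (mg/L) vs. peak current (arbitrary units)
conc = [1.0, 5.0, 10.0, 15.0, 20.0]
current = [2.1, 10.0, 20.2, 29.9, 40.1]
slope, intercept, r2 = linear_fit(conc, current)

# Detection limit from LOD = 3 * sigma_blank / slope (sigma_blank assumed here)
sigma_blank = 0.18
lod = 3 * sigma_blank / slope
print(round(slope, 3), round(r2, 4), round(lod, 2))  # 1.998 0.9999 0.27
```

The 3σ/slope convention is one common definition of the detection limit; the abstract does not state which criterion the authors used.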
Authors: Bedassa Tadesse
The African Growth and Opportunity Act (AGOA) has been a crucial trade and development initiative, offering preferential access to qualified Sub-Saharan African (SSA) countries to the United States market since its enactment in 2000. This paper presents a comprehensive review of scholarly articles and policy reports that analyze the impact of AGOA on the economic performance of SSA countries. Employing various econometric methods and data analysis techniques, researchers have investigated the effects of AGOA on trade flows, foreign direct investment (FDI) inflows, employment, economic growth, and poverty levels. The findings reveal that AGOA has positively affected the region’s trade, particularly in apparel, textiles, and agriculture. However, its influence on promoting export diversification and attracting FDI is nuanced, with substantial heterogeneity among the beneficiary countries and industries within each country. While some SSA countries have experienced substantial export growth and FDI inflows, others have not fully leveraged the benefits of AGOA due to absorptive capacity constraints and governance challenges. AGOA’s effectiveness in promoting broad-based employment, GDP growth, and poverty reduction remains an active area of inquiry, necessitating further research to understand the policy’s sustained impact and inform future trade policy designs for SSA countries.
Authors: Raphaele Malheiro, Aires Camões, Gibson Meira, Rui Reis, Aline Nóbrega
Integrating waste and industrial by-products into concrete is an alternative way to reduce global cement consumption, enhancing its eco-friendliness. In this context, residues such as fly ash have been increasingly utilised. Considering the vulnerability of concrete with fly ash to carbonation and, at the same time, its high resistance to chlorides, it is important to investigate the behaviour of these concretes under the combined action of both agents. For this purpose, an experimental investigation was conducted, studying mortar and concrete specimens with 40% replacement of cement with fly ash. These specimens were subjected to a combination of actions (Cl− and CO2) in two phases: initially through immersion and drying tests, and subsequently through a combination of accelerated tests. Concerning the chloride impact study, free and total chloride profiles were studied. Concerning the impact of carbonation, colourimetric and chemical tests were used. The results demonstrate a significant influence of the combined action not only on chloride penetration in cement-based materials with fly ash but also on the development of a carbonation front. Sequential exposure of cement-based materials with fly ash to environments with high Cl− and CO2 content may, on the one hand, increase carbonation resistance while, on the other hand, substantially reducing chloride penetration resistance.
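Total-chloride depth profiles of the kind studied here are commonly fitted to the error-function solution of Fick's second law. A minimal sketch with illustrative parameter values (surface content, diffusion coefficient, and exposure time are assumptions, not the investigation's measurements):

```python
from math import erfc, sqrt

def chloride_profile(x_mm, t_years, Cs, D_mm2_per_year):
    """Fick's 2nd-law solution commonly fitted to chloride profiles:
    C(x, t) = Cs * erfc(x / (2 * sqrt(D * t)))."""
    return Cs * erfc(x_mm / (2 * sqrt(D_mm2_per_year * t_years)))

# Illustrative values: surface content 0.6% (binder mass), apparent diffusion
# coefficient 20 mm^2/year, after 2 years of exposure
for depth in (0, 5, 10, 20):
    print(depth, round(chloride_profile(depth, 2.0, 0.6, 20.0), 4))
```

Fitting Cs and D to measured free- and total-chloride profiles is the usual way combined-action effects on chloride penetration resistance are quantified.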
Authors: Evangelos Stefanou, Panagiotis Louvros, Fotios Stefanidis, Evangelos Boulougouris
Within the expansive domain of maritime safety, optimizing evacuation procedures stands as a critical endeavour. After all, evacuation is literally the last and fundamental safety level afforded to mariners and passengers. Recent incidents have rekindled interest in assessing the performance of this ultimate safety barrier. However, addressing evacuability requires a holistic approach. The authors present herein the setup, simulation, and ultimately evaluation of a novel approach and its ability to rigorously assess multiple innovative risk-control options in a challenging, realistic setting. Moreover, its benchmarking against conventional regulation-dictated evacuation processes is captured distinctively along with the relative effectiveness of each proposed measure. Such measures include smart technologies and procedural changes that can result in substantial improvements to the current procedures. These will impact the ongoing discourse on maritime safety by providing insights for policymakers, vessel operators, emergency planners, etc., and emphasize the need for further research and development efforts to fortify the industry against evolving safety challenges.
Authors: Ruth Eniyepade Emberru, Raj Patel, Iqbal Mohammed Mujtaba, Yakubu Mandafiya John
Petrochemical feedstocks are experiencing rapid growth in demand, which will further expand their market in the coming years. This is due to an increase in the demand for petrochemical-based materials that are used in households, hospitals, transportation, electronics, and telecommunications. Consequently, petrochemical industries rely heavily on olefins, namely propylene, ethylene, and butene, as fundamental components for their manufacturing processes. Presently, there is a growing interest among refineries in prioritising their operations towards the production of fuels, specifically gasoline, diesel, and light olefins. The cost-effectiveness and availability of petrochemical primary feedstocks, such as propylene and butene, can be enhanced through the direct conversion of crude oil into light olefins using fluid catalytic cracking (FCC). To achieve this objective, the FCC technology, process optimisation, and catalyst modifications may need to be redesigned. Since the direct cracking of crude oil to olefins is still in its infancy, it is helpful that several methods of modifying the physicochemical characteristics of traditional FCC catalysts to enhance their selectivity toward light olefin production have already been documented. Based on a review of the existing zeolite catalysts, this work focuses on the factors that need to be optimized and the approaches to modifying FCC catalysts to maximize light olefin production from crude oil conversion via FCC. Several viewpoints have been combined as a result of this research, and recommendations have been made for future work in the areas of optimising the yield of light olefins by engineering the pore structure of zeolite catalysts, reducing deactivation by adding dopants, and conducting technoeconomic analyses of direct crude oil cracking to produce light olefins.
Authors: Amita Dessai, Hassanali Virani
Emotion classification using physiological signals is a promising approach that is likely to become the most prevalent method. Bio-signals such as those derived from Electrocardiograms (ECGs) and the Galvanic Skin Response (GSR) are more reliable than facial and voice recognition signals because they are not influenced by the participant’s subjective perception. However, the precision of emotion classification with ECG and GSR signals is not yet satisfactory, and new methods need to be developed to improve it. In addition, the fusion of the time and frequency features of ECG and GSR signals should be explored to increase classification accuracy. Therefore, we propose a novel technique for emotion classification that exploits the early fusion of ECG and GSR features extracted from data in the AMIGOS database. To validate the performance of the model, we used various machine learning classifiers, such as Support Vector Machine (SVM), Decision Tree, Random Forest (RF), and K-Nearest Neighbor (KNN) classifiers. The KNN classifier gave the highest accuracy for Valence and Arousal, with 69% and 70% for ECG and 96% and 94% for GSR, respectively. The combination of mutual-information feature selection and KNN classification outperformed the other classifiers. Interestingly, the classification accuracy for the GSR was higher than for the ECG, indicating that the GSR is the preferred modality for emotion detection. Moreover, the fusion of features significantly enhances the accuracy of classification compared with ECG features alone. Overall, our findings demonstrate that the proposed model based on multiple modalities is suitable for classifying emotions.
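Early fusion and KNN classification as described above can be sketched as follows. The feature values are toy numbers, not AMIGOS features, and the real pipeline's mutual-information feature selection step is omitted here:

```python
from math import dist
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote on Euclidean distance."""
    nearest = sorted(range(len(train_X)), key=lambda i: dist(train_X[i], query))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

def fuse(ecg_feats, gsr_feats):
    """Early fusion: concatenate ECG- and GSR-derived feature vectors per sample."""
    return [list(e) + list(g) for e, g in zip(ecg_feats, gsr_feats)]

# Toy features: 2 ECG + 2 GSR features per sample, labelled e.g. by arousal level
ecg = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.9)]
gsr = [(0.0, 0.1), (0.1, 0.0), (1.1, 0.9), (0.9, 1.1)]
y = ["low", "low", "high", "high"]

X = fuse(ecg, gsr)
print(knn_predict(X, y, query=[0.95, 0.95, 1.0, 1.0], k=3))  # high
```

The key point of early fusion is that the classifier sees a single concatenated feature vector per sample, rather than combining per-modality decisions afterwards (late fusion).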
Authors: Achuth Jayakrishnan, Wan Rosalina Wan Rosli, Ahmad Rashidi Mohd Tahir, Fashli Syafiq Abd Razak, Phei Er Kee, Hui Suan Ng, Yik-Ling Chew, Siew-Keah Lee, Mahenthiran Ramasamy, Ching Siang Tan, Kai Bin Liew
Many beneficial proteins have limited natural availability, which often restricts their supply and thereby reduces their potential for therapeutic or industrial usage. The advent of recombinant DNA (rDNA) technology enables the utilization of different microbes as surrogate hosts to facilitate the production of these proteins. This microbial technology continues to evolve and integrate with modern innovations to develop more effective approaches for increasing the production of recombinant biopharmaceuticals. These strategies encompass fermentation technology, metabolic engineering, the deployment of strong promoters, novel vector elements such as inducers and enhancers, protein tags, secretion signals, synthetic biology, high-throughput devices for cloning, and process screening. This appraisal commences with a general overview of the microbial manufacture of recombinant proteins and biopharmaceuticals and the trends in biopharmaceutical development, and then discusses the approaches adopted for accomplishing this. The design of the upstream process, which involves host selection, vector design, and promoter design, is a crucial component of production strategies. On the other hand, the downstream process focuses on extraction and purification techniques. Additionally, the review covers the most modern tools and resources, methods for overcoming low expression, the cost of producing biopharmaceuticals in microbes, and readily available recombinant protein products.
Authors: Ahmad Yaman Abdin, Claus Jacob
We are excited to share with you a crucial moment in the journey of Sci (ISSN 2413-4155) as we are announcing its new Aims and Scope [...]
Authors: Bachir Benarba, Khadidja Belhouala
The Bryonia genus (Cucurbitaceae) comprises 13 species considered medicinal, with significant pharmacological value for treating as well as preventing various ailments. The current systematic review aims to present useful and updated findings published on this genus in the last two decades. Based on PubMed, Science Direct, JSTOR, and Google Scholar, 42 of the available studies on Bryonia published from 2000 to 2022 were selected. Thereafter, these studies were analyzed, summarized, and separately recorded according to topic or section, adding some comments for each. Our review provides a botanical description of the genus, followed by its indigenous uses. Furthermore, more than 150 reported phytochemical compounds are grouped into families such as terpenoids, alkaloids, flavonoids, glycosides, saponins, and volatile oils. The biological activities of this genus are then presented, including its antimicrobial, antioxidant, antidiabetic, antinociceptive, and anti-inflammatory functions, along with an interesting anticancer efficiency. Overall, our findings could contribute to forthcoming investigations that may lead to identifying the phytoconstituents responsible for Bryonia’s efficacy.
Authors: Alexander Machado Cardoso, Carlos Vinicius Ferreira da Silva, Vânia Lúcia de Pádua
Microorganisms play a fundamental role in sustainable agriculture, and their importance in common bean (Phaseolus vulgaris) cultivation cannot be overstated. This review article aims to comprehensively explore the diverse roles of microorganisms in sustainable biofortified common bean cultivation. Biofortification refers to the process of increasing the nutrient content in crops, which helps combat deficiencies in iron, zinc, and vitamins in the human body. Biofortified beans have better agronomic characteristics and offer higher micronutrient content compared to conventional crops. We examine the contribution of various microbial communities in nitrogen fixation, soil structure improvement, nutrient recycling, and disease suppression. Understanding the interaction between beneficial microorganisms and biofortified common bean plants enables us to develop ecologically sound and sustainable approaches to optimize crop productivity and improve nutrition and livelihoods for millions of people worldwide while reducing the environmental impact of agricultural practices.
Authors: Md Fahim Shahoriar Titu, Abdul Aziz Chowdhury, S. M. Rezwanul Haque, Riasat Khan
The environmental physiognomy of an area can significantly diminish its aesthetic appeal, rendering it susceptible to visual pollution, the unbeaten scourge of modern urbanization. In this study, we propose using a deep learning network and a robotic vision system integrated with Google Street View to identify streets and textile-based visual pollution in Dhaka, the megacity of Bangladesh. The issue of visual pollution extends to the global apparel and textile industry, as well as to various common urban elements such as billboards, bricks, construction materials, street litter, communication towers, and entangled electric wires. Our data collection encompasses a wide array of visual pollution elements, including images of towers, cables, construction materials, street litter, cloth dumps, dyeing materials, and bricks. We employ two open-source tools to prepare and label our dataset: LabelImg and Roboflow. We develop multiple neural network models to swiftly and accurately identify and classify visual pollutants in this work, including Faster SegFormer, YOLOv5, YOLOv7, and EfficientDet. The tuna swarm optimization technique has been used to select the applied models’ final layers and corresponding hyperparameters. In terms of hardware, our proposed system comprises a Xiaomi-CMSXJ22A web camera, a 3.5-inch touchscreen display, and a Raspberry Pi 4B microcontroller. Subsequently, we program the microcontroller with the YOLOv5 model. Rigorous testing and trials are conducted on these deep learning models to evaluate their performance against various metrics, including accuracy, recall, regularization and classification losses, mAP, precision, and more. The proposed system for detecting and categorizing visual pollution within the textile industry and urban environments has achieved notable results: the YOLOv5 and YOLOv7 models achieved 98% and 92% detection accuracies, respectively.
Finally, the YOLOv5 technique has been deployed into the Raspberry Pi edge device for instantaneous visual pollution detection. The proposed visual pollutants detection device can be easily mounted on various platforms (like vehicles or drones) and deployed in different urban environments for on-site, real-time monitoring. This mobility is crucial for comprehensive street-level data collection, potentially engaging local communities, schools, and universities in understanding and participating in environmental monitoring efforts. The comprehensive dataset on visual pollution will be published in the journal following the acceptance of our manuscript.
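Detection metrics such as the mAP and precision cited above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch (the 0.5 IoU threshold is a common convention, not a value stated in the abstract):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction typically counts as a true positive when IoU >= 0.5
pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(pred, truth))  # 0.3333333333333333 -> counted as a miss at IoU 0.5
```

Precision, recall, and ultimately mAP are then computed by sweeping the confidence threshold over these matched true positives and false positives.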
Authors: Vladimir L. Gavrikov, Alexey I. Fertikov, Ruslan A. Sharafutdinov, Zhonghua Tang, Eugene A. Vaganov
This study explored whether consistent differences can be found between early-wood and late-wood in terms of the elemental content of tree rings. The species studied was Pinus sylvestris L. growing within an even-aged stand planted during the early 1970s in eastern Siberia. The wood specimens were extracted from the north and south sides of trees and subsequently scanned with an Itrax Multiscanner X-ray fluorescence facility. A sequence of relatively wide tree rings was chosen for the analysis. The scanning data on a number of elements (Al, Si, P, S, Cl, K, Ca, Ti, Mn, Fe, Cu, Zn, Sr, and Hg) were split into early-wood and late-wood data for each year of growth. The early- and late-wood data in the same ring were analyzed for basic statistics against each other as well as against available meteorological data. In the northern direction, the elements Al, Si, P, Cl, Cu, and Zn were always more abundant in the late-wood, while Ca, Fe, and Sr were always more abundant in the early-wood. Importantly, the differences for P, Ca, Fe, Cu, Zn, and Sr were always significant. The calcium content in the early-wood was the most consistently reflective of the meteorological data for the early summer (June). In some trees, the late-wood K content was well correlated with the Vysotskii–Ivanov climatic index. In the southern direction, Cu and Zn were always more abundant in the late-wood, while Sr was more abundant in the early-wood. The differences for all three elements were always significant. The cases of consistent relationships, though rare, help to develop a research program in the area of dendrochemistry.
Authors: Emilio Ferrara
The significant advancements in applying artificial intelligence (AI) to healthcare decision-making, medical diagnosis, and other domains have simultaneously raised concerns about the fairness and bias of AI systems. This is particularly critical in areas like healthcare, employment, criminal justice, credit scoring, and increasingly, in generative AI models (GenAI) that produce synthetic media. Such systems can lead to unfair outcomes and perpetuate existing inequalities, including generative biases that affect the representation of individuals in synthetic data. This survey study offers a succinct, comprehensive overview of fairness and bias in AI, addressing their sources, impacts, and mitigation strategies. We review sources of bias, such as data, algorithm, and human decision biases—highlighting the emergent issue of generative AI bias, where models may reproduce and amplify societal stereotypes. We assess the societal impact of biased AI systems, focusing on perpetuating inequalities and reinforcing harmful stereotypes, especially as generative AI becomes more prevalent in creating content that influences public perception. We explore various proposed mitigation strategies, discuss the ethical considerations of their implementation, and emphasize the need for interdisciplinary collaboration to ensure effectiveness. Through a systematic literature review spanning multiple academic disciplines, we present definitions of AI bias and its different types, including a detailed look at generative AI bias. We discuss the negative impacts of AI bias on individuals and society and provide an overview of current approaches to mitigate AI bias, including data pre-processing, model selection, and post-processing. We emphasize the unique challenges presented by generative AI models and the importance of strategies specifically tailored to address these. 
Addressing bias in AI requires a holistic approach involving diverse and representative datasets, enhanced transparency and accountability in AI systems, and the exploration of alternative AI paradigms that prioritize fairness and ethical considerations. This survey contributes to the ongoing discussion on developing fair and unbiased AI systems by providing an overview of the sources, impacts, and mitigation strategies related to AI bias, with a particular focus on the emerging field of generative AI.
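One concrete instance of the data pre-processing mitigation strategies surveyed above is reweighing in the style of Kamiran and Calders, which assigns instance weights so that the protected attribute and the label become statistically independent under the weighted distribution. A minimal sketch on toy data (groups and labels are invented for illustration):

```python
from collections import Counter

def reweighing(groups, labels):
    """Pre-processing weights: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).
    Over-represented (group, label) pairs get weight < 1; under-represented
    pairs get weight > 1, removing the group-label dependence."""
    n = len(groups)
    p_a = Counter(groups)
    p_y = Counter(labels)
    p_ay = Counter(zip(groups, labels))
    return [(p_a[a] / n) * (p_y[y] / n) / (p_ay[(a, y)] / n)
            for a, y in zip(groups, labels)]

# Illustrative data: group "m" receives the positive label more often than "f"
groups = ["m", "m", "m", "f", "f", "f"]
labels = [1, 1, 0, 1, 0, 0]
print([round(w, 3) for w in reweighing(groups, labels)])
# [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

A downstream classifier trained with these sample weights sees a distribution in which neither group is favoured for the positive label; model-selection and post-processing strategies operate at later stages of the same pipeline.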
Authors: Ahmed Salih Al-Khaleefa, Ghazwan Fouad Kadhim Al-Musawi, Tahseen Jebur Saeed
Current advancements in the technology of the Internet of Things (IoT) have led to the proliferation of various applications in the healthcare sector that use IoT. Recently, it has been shown that voice signal data of the respiratory system (i.e., breathing, coughing, and speech) can be processed through machine learning techniques to detect different diseases of this system such as COVID-19, considered an ongoing global pandemic. Therefore, this paper presents a new IoT framework for the identification of COVID-19 based on breathing voice samples. Using IoT devices, voice samples were captured and transmitted to the cloud, where they were analyzed and processed using machine learning techniques such as the naïve Bayes (NB) algorithm. In addition, the performance of the NB algorithm was assessed based on accuracy, sensitivity, specificity, precision, F-Measure, and G-Mean. The experimental findings showed that the proposed NB algorithm achieved 82.97% accuracy, 75.86% sensitivity, 94.44% specificity, 95.65% precision, 84.61% F-Measure, and 84.64% G-Mean.
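The six evaluation measures listed above all derive from the binary confusion matrix; the G-Mean in particular is the geometric mean of sensitivity and specificity. A minimal sketch (the counts below are illustrative, chosen only to land near the reported figures, not taken from the study):

```python
from math import sqrt

def metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)             # recall on the positive class
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    g_mean      = sqrt(sensitivity * specificity)
    return accuracy, sensitivity, specificity, precision, f_measure, g_mean

# Illustrative confusion matrix
acc, sens, spec, prec, f1, gm = metrics(tp=22, fp=1, tn=17, fn=7)
print(round(acc, 4), round(sens, 4), round(spec, 4))  # 0.8298 0.7586 0.9444
```

Reporting G-Mean alongside accuracy guards against a classifier that scores well simply by favouring the majority class, since G-Mean collapses to zero if either sensitivity or specificity does.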
Authors: Roman Kornev, Igor Gornushkin, Lubov Shabarova, Alena Kadomtseva, Georgy Mochalov, Nikita Rekunov, Sergey Romanov, Vitaly Medov, Darya Belousova, Nikita Maleev
The processes of hydrogen reduction of silicon and germanium chlorides under the conditions of a high-frequency (40.68 MHz) counteracted arc discharge stabilized between two rod electrodes are investigated. The main gas-phase and solid products of the plasma-chemical transformations are determined. Thermodynamic analysis of the SiCl4 + H2 and GeCl4 + H2 systems for optimal process parameters was carried out. Using the hydrogen reduction of SiCl4 as an example, the gas-dynamic and thermal processes for this type of discharge are investigated by numerical modeling. The impurity composition of the gas-phase and solid reaction products is investigated. The possibility of single-stage production of high-purity Si and Ge, mainly in the form of compact ingots, as well as high-purity chlorosilanes and trichlorogermane, is shown.
Authors: Emma Yann Zhang, Adrian David Cheok, Zhigeng Pan, Jun Cai, Ying Yan
In recent years, generative transformers have become increasingly prevalent in the field of artificial intelligence, especially within the scope of natural language processing. This paper provides a comprehensive overview of these models, beginning with the foundational theories introduced by Alan Turing and extending to contemporary generative transformer architectures. The manuscript serves as a review, historical account, and tutorial, aiming to offer a thorough understanding of the models’ importance, underlying principles, and wide-ranging applications. The tutorial section includes a practical guide for constructing a basic generative transformer model. Additionally, the paper addresses the challenges, ethical implications, and future directions in the study of generative models.
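The core computation of a generative transformer, scaled dot-product attention, can be sketched in a few lines. This is a didactic toy in the spirit of the paper's tutorial section, not its actual guide: no learned projections, masking, or multiple heads, and the Q/K/V values are contrived so each query attends almost entirely to one key:

```python
from math import exp, sqrt

def matmul(A, B):
    """Naive matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax_rows(M):
    """Numerically stable row-wise softmax."""
    out = []
    for row in M:
        m = max(row)
        e = [exp(v - m) for v in row]
        s = sum(e)
        out.append([v / s for v in e])
    return out

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    K_T = [list(c) for c in zip(*K)]
    scores = [[v / sqrt(d_k) for v in row] for row in matmul(Q, K_T)]
    return matmul(softmax_rows(scores), V)

# Two tokens, d_k = 2: each query aligns with exactly one key, so the output
# reproduces the corresponding value row
Q = [[10.0, 0.0], [0.0, 10.0]]
K = [[10.0, 0.0], [0.0, 10.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
print([[round(v, 3) for v in row] for row in attention(Q, K, V)])
# [[1.0, 0.0], [0.0, 1.0]]
```

A full generative transformer stacks this mechanism with learned Q/K/V projections, causal masking, and feed-forward layers, sampling the next token from the resulting distribution.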
Authors: Gül Dikeç, Simge Öztürk, Neslihan Taşbaşı, Damla Figenergül, Bilal Buğrahan Güler
This study explored the future-oriented perceptions of Generation Z students at a foundation university. The study was conducted using qualitative research with a phenomenological design. The study sample consisted of 11 university students over the age of 18 who agreed to participate. Data were collected online through individual interviews in Türkiye. Colaizzi’s phenomenological analysis method was used in the data analysis. The content analysis determined three main themes and eleven sub-themes. The first theme concerned the students’ knowledge about the “current situation of the country.” Under this theme were four sub-themes: economic problems, the immigrant situation, the education and justice system, and the country’s agenda. In the second theme, students shared their opinions about “being a student in the country.” This theme included economic hardship, limited participation in social activities, and housing problems. In the last theme, “future anxiety,” the sub-themes included experiencing hopelessness versus hope; uncertainty caused anxiety, as did going abroad, finding a job, and self-improvement. It was determined that the participants were worried about the current situation of the country they lived in due to economic problems; while some were hopeful about the future, others were hopeless and intended to go abroad. This study might contribute to the literature on determining the future-oriented perceptions, possible stressors and hope levels of Generation Z university students in Türkiye. Additionally, intervention programs can be developed for the management of these stressors to protect the mental health of Generation Z university students. More broadly, it is necessary to protect the mental health of young people, who are the adults of the future, and to create youth policies that maintain social opportunities.
Authors: Otthein Herzog, Matthias Jarke, Siegfried Zhiqiang Wu
Digital twins are emerging as prime analysis, prediction, and control concepts for enabling the Industrie 4.0 vision of cyber-physical production systems (CPPSs). Today’s growing complexity and volatility cannot be handled by monolithic digital twins but require a fundamentally decentralized paradigm of cooperating digital twins. Moreover, societal trends such as worldwide urbanization and a growing emphasis on sustainability highlight competing goals that must be reflected not just in cooperating but also in competing digital twins, often even interacting in “coopetition”. This paper argues for multi-agent systems (MASs) to address this challenge, using the example of embedding industrial digital twins into an urban planning context. We provide a technical discussion of suitable MAS frameworks and interaction protocols; data architecture options for efficient data supply from heterogeneous sensor streams and sovereignty in data sharing; and strategic analysis for scoping a digital twin systems design among domain experts and decision makers. To illustrate the way still ahead for research and practice, the paper reviews some success stories of MASs in Industrie/Logistics 4.0 settings and sketches a comprehensive vision for digital twin-based holistic urban planning.
Authors: Evangelia Papaevangelou, Zacharoula Papadopoulou, Athanasios Mandroukas, Yiannis Michaildis, Pantelis Nikolaidis, Nikos Margaritelis, Thomas Metaxas
The aim of the present research was to investigate the variation in the anthropometric characteristics and isokinetic muscle strength of elite female team sport players during a season (29–36 weeks). Three groups of female athletes consisting of soccer (n = 19; age, 23.2 ± 4.3 years), basketball (n = 26; 21.1 ± 5.4 years) and handball players (n = 26; 21.1 ± 4.2 years) underwent anthropometric and isokinetic measurements at the beginning of the preparation period, in the middle and at the end of the competitive season. Isokinetic peak torque values of the hamstrings (H) and quadriceps (Q), as well as the conventional strength ratios of H:Q, were tested on an isokinetic dynamometer at angular velocities of 60, 180 and 300°·s−1. Body weight, lean body mass and body fat of all groups decreased from the first to the third testing session (p < 0.05). Isokinetic peak torque gradually increased during the three measurements (p < 0.05). The soccer players had lower body weight and body fat compared to the basketball and handball players (p < 0.05). Isokinetic peak torque did not differ between the sports at any angular velocity or knee movement (flexion and extension), with the exception of 180°·s−1. The improvement observed for all athletes can be attributed to the training programs that collectively characterize these team sports.
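The conventional H:Q ratio referenced above is simply the hamstring peak torque divided by the quadriceps peak torque measured at the same angular velocity. A minimal sketch with illustrative torque values (not the study's measurements):

```python
def hq_ratio(hamstring_pt, quadriceps_pt):
    """Conventional H:Q ratio: hamstring peak torque / quadriceps peak torque."""
    return hamstring_pt / quadriceps_pt

# Illustrative peak torques (N·m) at 60 deg/s for one athlete
print(round(hq_ratio(hamstring_pt=90.0, quadriceps_pt=150.0), 2))  # 0.6
```

The ratio is commonly used as a screening indicator of muscle balance around the knee, which is why it is tracked across the season alongside the raw peak torques.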
Authors: Rosmaya Dewi Norazanita Shamsuddin Muhammad Saifullah Abu Bakar Sutarat Thongratkaew Kajornsak Faungnawakij Muhammad Roil Bilad
River water can be used as a source of drinking water. However, it is vital to consider the existence of natural organic matter (NOM) and its possible influence on water quality (low turbidity, high color). The level of NOM in river water significantly impacts the ecosystem’s health and the water’s quality, and NOM therefore needs to be removed. A membrane-based approach is attractive for treating NOM successfully, but is still hindered by the problem of membrane fouling. This study aims to develop polyvinylidene fluoride (PVDF)-based membranes customized for NOM removal from river water. The anti-fouling property was imparted by a coating of tannic acid (TA) and Fe3+ on the pre-prepared PVDF membrane. The results show that the TA–Fe coatings were effective, as demonstrated by the FTIR spectra, SEM, and EDS data. The coatings made the membrane more hydrophilic, with a smaller pore size and lower clean water permeability. Such properties offer enhanced NOM rejections (up to 100%) and remarkably higher fouling recovery (up to 23%), desirable for maintaining long-term filtration performance.
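A "fouling recovery" figure of this kind is typically quantified as a flux recovery ratio (FRR): the clean-water flux measured after cleaning a fouled membrane, relative to the pristine membrane's initial clean-water flux. A minimal sketch of the calculation, with hypothetical flux values not taken from the study:

```python
def flux_recovery_ratio(j_pristine, j_cleaned):
    """Flux recovery ratio (%): clean-water flux after cleaning, relative to
    the pristine membrane's initial clean-water flux."""
    return 100.0 * j_cleaned / j_pristine

# Hypothetical clean-water fluxes (L m^-2 h^-1) before fouling and after cleaning.
print(round(flux_recovery_ratio(250.0, 205.0), 1))  # 82.0
```

An improvement of "up to 23%" would then correspond to the coated membrane's FRR exceeding the uncoated membrane's FRR by that margin.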
Authors: Hamed Taherdoost
Blockchain offers a cutting-edge solution for storing medical data, carrying out medical transactions, and establishing trust for medical data integration and exchange in a decentralized open healthcare network setting. While blockchain in healthcare has garnered considerable attention, privacy and security concerns remain at the center of the debate when adopting blockchain for information exchange in healthcare. This paper reviews research on blockchain privacy and security in healthcare published from 2017 to 2022. In light of the existing literature, this critical evaluation assesses the current state of affairs, with a particular emphasis on papers that deal with practical applications and difficulties. By providing a critical evaluation, this review offers insight into prospective future research directions and advances.
Authors: Martin Lindner Lukas Bank Johannes Schilp Matthias Weigold
Digital twins are among the technologies considered to have high potential. At the same time, there is no uniform understanding of what this technology means. Definitions are used across disciplinary boundaries, resulting in a multitude of different interpretations. The concepts behind the terms should be clearly named in order to transfer knowledge and bundle developments in digitalization. In particular, the Reference Architectural Model for Industry (RAMI) 4.0, as the guiding concept of digitalization, should be in harmony with these terms so that a contradiction-free relationship can be established. This paper therefore summarizes the most important definitions and descriptions from the scientific community. By evaluating the relevant literature, a concept is derived. The concept presented in this work concretizes the requirements for and understanding of digital twins in the frame of RAMI 4.0, with a focus on manufacturing. It thus contributes to the understanding of the technology and is intended to support the implementation of digital twins in this context.
Authors: Jerica Wilson Katerina Evangelou Youhai H. Chen Hai-Feng Ji
Context: Chronic inflammation has been linked to cancer since the 19th century. Tumor growth is supported by the proangiogenic factors that chronic inflammation requires. Polarized leukocytes initiate these angiogenic and tumorigenic factors. TIPE2, a transport protein, manages the cytoskeletal rearrangement that gives a polarized leukocyte its motility. Inhibition of this protein could lead to a therapeutic option for solid tumor cancers; however, no such inhibitors have been developed so far due to the large cavity size of the TIPE2 protein. Here we have examined possible small molecule inhibitors by combining structure-based and fragment-based drug design approaches. The highest-binding ligands were complexed with the protein, and fragment libraries were docked with the complex with the intention of linking the hit compounds and fragments to design a more potent ligand. Three hit compounds were identified by in silico structure-based screening, and a linked compound, C2–F14, of excellent binding affinity was identified by linking fragments to the hit compounds. C2–F14 demonstrates good binding stability in molecular dynamics simulations and favorable predicted ADME properties. Methods: High-throughput molecular docking calculations of mass libraries were performed using AutoDock Vina 1.1.2. Molecular docking of individual ligands was performed using AutoDock Vina with PyRx. Ligand libraries were prepared using OpenBabel, and linked ligands were prepared using Avogadro. The protein was prepared using AutoDockTools-1.5.6. Protein–ligand complexes were visualized with PyMOL. Two- and three-dimensional representations of protein–ligand interactions were plotted with the BIOVIA Discovery Studio Visualizer. In silico absorption, distribution, metabolism, and excretion (ADME) properties were calculated using SwissADME. Molecular dynamics simulations were conducted with GROMACS.
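As an illustration of how such high-throughput docking runs are commonly scripted, the sketch below builds an AutoDock Vina command line for a single receptor-ligand pair. The file names, box center, and box size are hypothetical placeholders, not values from the study:

```python
import shlex

def vina_command(receptor, ligand, center, size, out, exhaustiveness=8):
    """Build an AutoDock Vina command line for one receptor-ligand pair.

    `center` and `size` define the docking search box (in angstroms).
    """
    cx, cy, cz = center
    sx, sy, sz = size
    return [
        "vina",
        "--receptor", receptor,
        "--ligand", ligand,
        "--center_x", str(cx), "--center_y", str(cy), "--center_z", str(cz),
        "--size_x", str(sx), "--size_y", str(sy), "--size_z", str(sz),
        "--out", out,
        "--exhaustiveness", str(exhaustiveness),
    ]

# Hypothetical example: dock one fragment against a prepared TIPE2 structure.
cmd = vina_command("tipe2.pdbqt", "fragment_14.pdbqt",
                   center=(10.0, 12.5, -3.0), size=(30, 30, 30),
                   out="fragment_14_docked.pdbqt")
print(" ".join(shlex.quote(a) for a in cmd))
# In a real pipeline this list would be passed to subprocess.run(cmd, check=True)
# once per library member, and the output scores parsed and ranked.
```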
Authors: Marija Šimat Mateja Janković Makek Maja Mičetić
The aim of this research is to present the effects of acupuncture treatment on the morning blood glucose level (BGL) in type 2 diabetes mellitus (T2DM) patients, and to describe them with a predictive model. The morning BGL was measured after overnight fasting during a three-month-long acupuncture treatment for two persons diagnosed with T2DM and compared with the BGL of two persons in similar health conditions taking only metformin-based drugs. It is shown that the morning BGL is highly affected by each single acupuncture treatment and by the number of treatments already applied. A significant lowering of BGL after each treatment is observed, as well as an overall BGL-lowering effect resulting from the repeated acupuncture. The observed BGL reduction was found to be maintained during a follow-up performed a year after the acupuncture. The measured BGL dynamics curves are analyzed and described by a model. The model describes all of the key features of the measured BGL dynamics well and provides personal parameters that characterize BGL regulation. The model is used to simulate BGL regulation by acupuncture performed at different frequencies. It can be used generally to predict the effects of acupuncture on BGL and to optimize the time between two treatments. The results will enable a better understanding of the application of acupuncture in diabetes, and a prediction of its effects in diabetes treatment.
Authors: Tosin Adewumi Sana Sabah Sabry Nosheen Abid Foteini Liwicki Marcus Liwicki
We conduct relatively extensive investigations of automatic hate speech (HS) detection using different State-of-The-Art (SoTA) baselines across 11 subtasks spanning six different datasets. Our motivation is to determine which of the recent SoTA models is best for automatic hate speech detection and what advantage methods, such as data augmentation and ensembling, may have on the best model, if any. We carry out six cross-task investigations. We achieve new SoTA results on two subtasks: macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020 dataset, surpassing previous SoTA scores of 51.52% and 26.52%, respectively. We achieve near-SoTA results on two others: macro F1 scores of 81.66% for subtask A of OLID 2019 and 82.54% for subtask A of HASOC 2021, in comparison to SoTA results of 82.9% and 83.05%, respectively. We perform error analysis and use two eXplainable Artificial Intelligence (XAI) algorithms (Integrated Gradients (IG) and SHapley Additive exPlanations (SHAP)) to reveal, through examples, how two of the models (Bi-Directional Long Short-Term Memory Network (Bi-LSTM) and Text-to-Text Transfer Transformer (T5)) make their predictions. Other contributions of this work are: (1) the introduction of a simple, novel mechanism for correcting Out-of-Class (OoC) predictions in T5, (2) a detailed description of the data augmentation methods, and (3) the revelation of the poor data annotations in the HASOC 2021 dataset through several examples and XAI (buttressing the need for better quality control). We publicly release our model checkpoints and code to foster transparency.
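For readers unfamiliar with the reported metric: macro F1 averages the per-class F1 scores without weighting by class frequency, which is why it is commonly preferred for imbalanced hate-speech datasets. A minimal sketch with toy labels (the class names are illustrative):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Toy binary labels: HOF = hateful/offensive, NOT = not.
y_true = ["HOF", "HOF", "NOT", "NOT", "NOT", "HOF"]
y_pred = ["HOF", "NOT", "NOT", "NOT", "HOF", "HOF"]
print(round(macro_f1(y_true, y_pred), 4))  # 0.6667
```

Because every class contributes equally, a model that ignores a rare class is penalized heavily, unlike with micro-averaged or accuracy-style metrics.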
Authors: Seyedmajid Hosseini Mohsen Norouzi Jian Xu
Strain sensors play a pivotal role in quantifying stress and strain across diverse domains, encompassing engineering, industry, and medicine. Their applicability has recently extended into the realm of wearable electronics, enabling real-time monitoring of body movements. However, conventional strain sensors, while extensively employed, grapple with limitations such as diminished sensitivity, suboptimal tensile strength, and susceptibility to environmental factors. In contrast, polymer-based composite strain sensors have gained prominence for their capability to surmount these challenges. The integration of carbon nanotubes (CNTs) as reinforcing agents within the polymer matrix ushers in a transformative era, bolstering mechanical strength, electrical conductivity, and thermal stability. This study comprises three primary components: simulation, synthesis of nanocomposites for strain sensor fabrication, and preparation of a comprehensive measurement set for testing purposes. The fabricated strain sensors, incorporating a robust polymer matrix of polyaniline known for its exceptional conductivity and reinforced with carbon nanotubes as strengthening agents, demonstrate good characteristics, including a high gauge factor, stability, and low hysteresis. Moreover, they exhibit high strain sensitivity and show linearity in resistance changes concerning applied strain. Comparative analysis reveals that the resulting gauge factors for composite strain sensors consisting of carbon nanotubes/polyaniline and carbon nanotubes/polyaniline/silicone rubber are 144.5 and 167.94, respectively.
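The gauge factors reported above follow the standard definition: the relative resistance change divided by the applied strain. A minimal sketch, with a hypothetical resistance reading chosen purely for illustration:

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (delta_R / R0) / strain: the relative resistance
    change of the sensor per unit of applied mechanical strain."""
    return ((r - r0) / r0) / strain

# Hypothetical reading: resistance rises from 100 ohm to 114.45 ohm at 0.1% strain.
gf = gauge_factor(100.0, 114.45, 0.001)
print(round(gf, 2))  # 144.5
```

A GF of this order of magnitude corresponds to the values reported for the CNT/polyaniline composites; conventional metallic foil gauges typically sit around GF ≈ 2.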
Authors: Demetris Koutsoyiannis Christian Onof Zbigniew W. Kundzewicz Antonis Christofides
The scientific and wider interest in the relationship between atmospheric temperature (T) and the concentration of carbon dioxide ([CO2]) has been enormous. According to the commonly assumed causality link, increased [CO2] causes a rise in T. However, recent developments cast doubts on this assumption by showing that this relationship is of the hen-or-egg type, or even unidirectional but opposite in direction to the commonly assumed one. These developments include an advanced theoretical framework for testing causality based on the stochastic evaluation of a potentially causal link between two processes via the notion of the impulse response function. Using and further expanding this framework, together with the longest available modern time series of globally averaged T and [CO2], we shed light on the potential causality between these two processes. All evidence resulting from the analyses suggests a unidirectional, potentially causal link with T as the cause and [CO2] as the effect. That link is not represented in climate models, whose outputs are also examined using the same framework, resulting in a link opposite to the one found when the real measurements are used.
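The stochastic impulse-response framework itself is more elaborate than can be shown here, but the underlying idea, that a cause should precede its effect in time, can be sketched with a simple lagged-correlation probe on synthetic data. This is an illustration of time precedence only, not the authors' actual method:

```python
import math

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_lag(cause, effect, max_lag):
    """Lag (in steps) at which effect[t] correlates best with cause[t - lag]."""
    scores = {}
    for lag in range(max_lag + 1):
        x = cause[: len(cause) - lag] if lag else cause
        y = effect[lag:]
        scores[lag] = corr(x, y)
    return max(scores, key=scores.get)

# Synthetic series: the "effect" copies the "cause" two steps later, so the
# dependence should peak at a positive lag of 2 (cause leads effect).
cause = [math.sin(0.3 * t) + 0.05 * ((t * 7919) % 13 - 6) for t in range(200)]
effect = [0.0, 0.0] + cause[:-2]
print(best_lag(cause, effect, 5))  # 2
```

A lag structure of this kind (maximal dependence at a non-zero lag) is the intuition behind identifying which of two coupled processes leads the other.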
Authors: Laura Stefani Goffredo Orlandi Marco Corsi Edoardo Falconi Roberto Palazzo Alessio Pellegrino Pietro Amedeo Modesti
Background: Transplanted patients are frail individuals who may be affected by diastolic dysfunction, leading to a decrease in exercise tolerance. Previous studies have reported that certain ECG and echocardiographic parameters (such as the P-wave interval, PQ interval, P-wave dispersion, Tend-P interval, QTc interval, and strain) can support the diagnosis of diastolic dysfunction when the ejection fraction is preserved. This study aimed to examine the potential diagnostic contribution of specific ECG and deformation parameters in transplant recipients, who are at a high risk of heart failure. Materials and Methods: A group of 33 transplanted subjects (17 renal and 16 liver recipients) were categorized using two scores for heart failure with preserved ejection fraction (HFpEF). Additionally, they underwent evaluation based on ECG parameters (P-wave interval, PQ interval, P-wave dispersion, Tend-P and QTc intervals) and echocardiographic deformation parameters (strain and twist). The Student’s t-test was used for statistical analysis. Results: The two scores identified different numbers of excludable and non-excludable subjects potentially affected by HFpEF. The non-excludable group presented ECG parameters with significantly higher values (P-wave, PQ interval, posterior wall diastole, and Tend-P, all with p ≤ 0.05) and significantly lower 4D strain and twist values (p < 0.05). Conclusions: There is evidence for a significant diagnostic contribution of additional ECG and echo strain parameters in an early phase of diastolic dysfunction in subjects potentially affected by HFpEF.
Authors: Evangelos Bellos
Power plants constitute the main sources of electricity production, and the calculation of their efficiency is a critical factor needed in energy studies. The efficiency improvement of power plants through the optimization of the cycle is a critical means of reducing fuel consumption and leading to more sustainable designs. The goal of the present work is the development of semi-empirical models for estimating the thermodynamic efficiency of power cycles. The developed model uses only the low and high operating temperature levels, which makes it flexible and easily applicable. The final expression is found by using literature data for different power cycles, namely: organic Rankine cycles, water-steam Rankine cycles, gas turbines, combined cycles and Stirling engines. According to the results, the real efficiency of the different cases was found to be slightly lower than that of the respective endoreversible cycle. Specifically, the present global model indicates that the thermodynamic efficiency is a function of the temperature ratio (low cycle temperature to high cycle temperature). The suggested equation can be exploited as a quick and accurate tool for calculating the thermodynamic efficiency of power plants using their operating temperature levels. Moreover, separate equations are provided for all of the examined thermodynamic cycles.
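The fitted semi-empirical expressions themselves are not reproduced in the abstract, but the two classical efficiency bounds that depend only on the temperature ratio, the Carnot limit and the endoreversible (Curzon-Ahlborn) efficiency that real plants were found to fall slightly below, can be sketched as follows (the example temperatures are hypothetical):

```python
def carnot_efficiency(t_low, t_high):
    """Ideal (Carnot) efficiency from absolute temperatures in kelvin."""
    return 1.0 - t_low / t_high

def endoreversible_efficiency(t_low, t_high):
    """Curzon-Ahlborn efficiency of an endoreversible cycle at maximum power;
    real plants typically fall slightly below this bound."""
    return 1.0 - (t_low / t_high) ** 0.5

# Example: heat rejection at 300 K, heat addition at 1200 K (ratio r = 0.25).
print(round(carnot_efficiency(300.0, 1200.0), 3))          # 0.75
print(round(endoreversible_efficiency(300.0, 1200.0), 3))  # 0.5
```

Both bounds are pure functions of the ratio r = T_low/T_high, which is precisely the functional dependence the paper's global model exploits.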
Authors: Hari Srivastava Hare Nigam Swagata Nandy
In this paper, we analyze the convergence of the Fourier series of a function g in the Besov and generalized Zygmund norms using generalized Nörlund matrix (N_{p,q}A) means of its Fourier series. The convergence results are also compared by means of applications.
Authors: Zacharias Frontistis Grigoris Lykogiannis Anastasios Sarmpanis
Among the different biological methods used for advanced wastewater treatment, membrane bioreactors (MBRs) have demonstrated superior efficiency due to their hybrid nature, combining biological and physical processes. However, their efficient operation and control remain challenging due to their complexity. This comprehensive review summarizes the potential of artificial neural networks (ANNs) to monitor, simulate, optimize, and control these systems. ANNs show a unique ability to reveal and simulate the complex relationships of dynamic systems such as MBRs, allowing for process optimization and fault detection. Such an early warning system leads to increased reliability and performance. Integrating ANNs with advanced algorithms and implementing Internet of Things (IoT) devices and new-generation sensors has the potential to transform the advanced wastewater treatment landscape towards the development of smart, self-adaptive systems. Nevertheless, several challenges must be addressed, including the need for high-quality data in large quantities, human resource training, and integration into existing control system facilities. Since the demand for advanced water treatment and water reuse will continue to expand, the proper implementation of ANNs, combined with other AI tools, is a promising strategy toward the development of integrated and efficient advanced water treatment schemes.
Authors: Rui Cereja Joana P. C. Cruz Joshua Heumüller Bernardo Vicente Ana Amorim Frederico Carvalho Sara Cabral Paula Chainho Ana C. Brito Inês J. Ferreira Mário Diniz
Bivalves accumulate toxins produced by microalgae, thus becoming harmful to humans. However, little information is available about the toxicity to the bivalve itself. In the present work, the physiological stress and damage in the oyster Magallana angulata after the ingestion of a toxic dinoflagellate species (Gymnodinium catenatum) and a diatom species (Skeletonema marinoi, which is non-toxic to humans but may be toxic to grazers) are evaluated against a control treatment fed with the chlorophyte Tetraselmis sp. Oysters were exposed for two hours to a concentration of 4 × 104 cells/L of G. catenatum and 2 × 107 cells/L of S. marinoi. The biomarkers superoxide dismutase (SOD), catalase (CAT), glutathione S-transferase (GST), total ubiquitin (Ubi) and acetylcholinesterase (AChE) were assessed. The exposure of M. angulata to G. catenatum led to a reduction in SOD and AChE activity and in ubiquitin concentrations when compared to the control treatment. Moreover, it increased CAT activity in the adductor muscle and maintained its activity in the other tissues tested. This may be related to the combination of reduced metabolism with the deployment of detoxification processes. S. marinoi also led to a decrease in all biomarkers tested in the gills and digestive glands. Therefore, both species tested caused physiological alterations in M. angulata after two hours of exposure.
Authors: Johannes Winter
For a long time, the challenge has been to provide products and services that precisely match the preferences, habits, and needs of users [...]
Authors: Florian Butollo Jana Flemming Christine Gerber Martin Krzywdzinski David Wandjo Nina Delicat Lorena Herzog
Academic studies prior to the pandemic tended to emphasize that the progression towards Industry 4.0 happened in an incremental manner. However, the extraordinary circumstances of the pandemic led to considerable investments that were widely interpreted as a (generalized) digitalization push. Little is known, however, about the character of such investments and their effects. The goal of this contribution is to provide an empirically based overview of recent investment in digital technologies in six sectors of the German economy: mechanical engineering, chemicals, automotives, logistics, healthcare, and financial services. Based on 36 case studies and a survey of 540 companies, we investigate the following questions: 1. How much did the COVID-19 pandemic reduce existing obstacles to investments in digitalization measures? 2. Is there a universal digitalization push due to the COVID-19 pandemic that differs from the trajectory before the pandemic? The results show that the pandemic affected investment unevenly. Investment was driven by the immediate need to sustain business operations through the virtualization of communication among employees and with external partners. However, there was less dynamism in shop-floor-related digitalization, as it was less related to epidemiological concerns and is more long-term in nature.
Authors: Gesualdo M. Zucco Giuseppe Sartori
Malingering refers to intentionally pretending or exaggerating physical or psychological symptoms to gain an external incentive, such as avoiding work, criminal prosecution or military service, or seeking financial compensation from insurance companies. Accordingly, various techniques have been developed in recent years by the scientific community to address this challenge. In this review, we discuss malingering within the visual, auditory and olfactory domains, as well as in cognitive disorders and psychopathology. We provide a general, critical, narrative overview of the intermodal criteria for differential diagnosis, and discuss validated psychophysical tools and electrophysiology-based tests for its detection, as well as insights for future directions.
Authors: Jamshed Bobokalonov Yanhong Liu Karley K. Mahalak Jenni A. Firrman Shiowshuh Sheen Siyuan Zhou LinShu Liu
Tomatoes are a perishable and seasonal fruit with a high economic impact. Carbon dioxide (CO2), among several other reagents, is used to extend the shelf-life and preserve the quality of tomatoes during refrigeration or packaging. To obtain insight into CO2 stress during tomato ripening, tomatoes at the late mature green stage were conditioned with one of two CO2 delivery methods: 5% CO2 for 14 days (T1) or 100% CO2 for 3 h (T2). Conventional physical and chemical characterization found that CO2 applied via either T1 or T2 delayed tomato ripening in terms of color change, firmness, and carbohydrate dissolution. However, T1 had longer-lasting effects. Furthermore, ethylene production was suppressed by CO2 in T1 and promoted in T2. These physical observations were further evaluated via RNA-Seq analysis at the whole-genome level, including genes involved in ethylene synthesis, signal transduction, and carotenoid biosynthesis. Transcriptomics analysis revealed that the introduction of CO2 via the T1 method downregulated genes related to fruit ripening; in contrast, T2 upregulated the gene encoding ACS6, the enzyme responsible for S1 ethylene synthesis, even though a large amount of ethylene was present, indicating that T1 and T2 regulate tomato ripening via different mechanisms. Quantitative real-time PCR (qRT-PCR) assays were used for validation and substantiated the RNA-Seq data. The results of the present research provide insight into gene regulation by CO2 during tomato ripening at the whole-genome level.
Authors: Lea M. Morath Roger J. Guillory Alexander A. Oliver Shu Q. Liu Martin L. Bocks Galit Katarivas Levy Jaroslaw W. Drelich Jeremy Goldman
Platinum-containing stents are commonly used in humans with hypercholesterolemia, whereas preclinical stent evaluation has commonly been performed in healthy animal models, providing inadequate information about stent performance under hypercholesterolemic conditions. In this investigation, we used an ApoE−/− mouse model to test the impact of hypercholesterolemia on neointima formation on platinum-containing implants. We implanted 125 μm diameter platinum wires into the abdominal aortas of ApoE−/− and ApoE+/+ mice for 6 months, followed by histological and immunofluorescence examination of neointimal size and composition. It was found that ApoE−/− mice developed neointimas with four times larger area and ten times greater thickness than ApoE+/+ counterparts. Neointimas developed in the ApoE−/− mice also contained higher amounts of lipids quantified as having 370 times more coverage compared to ApoE+/+, a 3-fold increase in SMCs, and a 22-fold increase in macrophages. A confluent endothelium had regenerated in both mouse strains. The ApoE−/− mice experienced luminal reductions more closely resembling clinically relevant restenosis in humans. Overall, the response to platinum arterial implants was highly dependent upon the atherogenic environment.
Authors: Mai M. Awad Randall B. Boone
Apis mellifera L. is considered one of the most important pollinators in nature. Unfortunately, as with many other insect species, honey bee populations are decreasing at an alarming rate, urging researchers to investigate the causes and stressors that precipitated this decline. This study focuses on chemical stressors that have been found to affect bee populations. We used pollen and honey samples to examine the variations in pesticides, selenium, and heavy metals in two different landscapes: urban and agricultural areas of northeastern Colorado, USA. Subsequently, we extrapolated the risks posed by these toxins’ residues to Apis spp. based on the current literature. We found no spatial variations in metal and selenium concentrations in the pollen and honey samples collected from urban and agricultural areas. Moreover, we observed no spatial variations in pesticide concentrations in pollen and honey samples. Based on the previous literature and a comparison of the residues of heavy metals, selenium, and pesticides in our pollen and honey samples, we found that the heavy metal and selenium residues in some honey and pollen likely pose a severe health risk to honey bees. Although the levels of pesticide residues were below the documented thresholds of risk, we consider the possibility of synergistic chemical impacts. Our findings support future efforts to investigate the health risks associated with multiple-factor combinations.
Authors: Rupak Kumar Das Anna Martin Tom Zurales Dale Dowling Arshia Khan
Electroencephalography (EEG) is a technique for understanding the brain’s functioning by analyzing its electrical signals. More recently, it has been increasingly used in studies focused on the causes and effects of dementia. More tools are now available to gather EEG data, which brings the challenge of understanding brain signals through signal processing. Professionals with an electrical engineering background are comfortable analyzing EEG data, but scientists in computer science and related fields need a source that identifies all of the available tools and the process of analyzing the data. This paper deals specifically with existing EEG data analysis tools and the processes involved in analyzing EEG data using these tools. Furthermore, the paper goes in-depth into the tools and the mechanisms of the data processing techniques. In addition, it provides a set of definitions required for a better understanding of EEG data analysis, which can be challenging. The purpose of this paper is to serve as a reference not only for scientists who are new to EEG data analysis but also for seasoned scientists looking for a specific data component in EEG, who can go straight to the section of the paper that deals with the tool they are using.
Authors: Jens Neuhüttler Maximilian Feike Janika Kutz Christian Blümel Bernd Bienzeisler
In recent years, a complex set of dynamic developments driven by both the economy and the emergence of digital technologies has put pressure on manufacturing companies to adapt. The concept of servitization, i.e., the shift from a product-centric to a service-centric value creation logic, can help manufacturing companies stabilize their business in such volatile times. Existing academic literature investigates the potential and challenges of servitization and the associated development of data-based services, so-called smart services, with a view to external market performance. However, with the increasing use of digital technologies in manufacturing and the development of internal smart services based on them, we argue that the existing insights on external servitization are also of interest for internal transformation. In this paper, we identify key findings from service literature, apply them to digital factory transformation, and structure them into six fields of action along the dimensions of people, technology, and organization. As a result, recommendations for designing digital factory transformation in manufacturing companies are derived from the perspective of servitization and developing internal smart services.
Authors: Giovanna Ricci Filippo Gibelli Paolo Bailo Anna Maria Caraffa Maria Angela Casamassima Ascanio Sirignano
Hoarding disorder (HD) is a recently recognized psychiatric condition, now classified under the category of obsessive-compulsive and related disorders in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). It leads to an unwarranted attachment to material possessions, such that the individual is unable to separate themselves from them. There is still a lack of awareness of the critical sociological implications of this disorder, which is too often considered a purely health-related issue. This article endeavors to frame hoarding disorder from a unique socio-criminological and legal perspective, proposing an alternative approach to HD that considers it not only as a mental disorder, but also as a genuine societal issue. We also explore potential avenues for protection, considering both the well-being of individuals with this mental disorder and the communities in which individuals suffering from HD reside. This paper presents a fresh perspective on HD, aiming to delineate its impact and significance as an affliction affecting both individuals and society at large.
Authors: Pantelis Nikolaidis Konstantinos Havenetidis
Strenuous exercise, such as military training, is known to demand a high degree of physical performance and to cause injuries. The present study aimed to (a) monitor the incidence of soft tissue injuries (blisters, contusions, and lacerations) among cadets during Basic Combat Training (BCT), and (b) identify possible risk factors for these injuries. Participants were 315 first-grade cadets (women, n = 28; men, n = 287) recruited from the Hellenic Army Academy. Seven weeks of BCT resulted in an overall cadet injury rate of 24.1% (n = 76), with 13.7% injured once, whereas 10.4% of participants were injured 2–6 times. The incidence of injuries was 2.9 soft tissue injuries per 1000 training hours. The logistic regression model using sex, being an athlete, nationality, weight, height, body mass index, and percentage of body fat (BF) to predict soft tissue injury was not statistically significant (χ2(7) = 5.315, p = 0.622). The results of this study showed that BCT caused a large number of soft tissue injuries, similar to the number reported for musculoskeletal injuries. In conclusion, following BCT, soft tissue injury characteristics (occurrence, severity, treatment) are similar to those reported for musculoskeletal injuries in Army cadets. However, risk factors such as sex, nationality, and BF were not related to soft tissue injury prediction, in contrast to what has previously been shown for musculoskeletal injuries in the same sample group.
Authors: Raghav V. Anand Maysam F. Abbod Shou-Zen Fan Jiann-Shing Shieh
The term “anesthetic depth” refers to the extent to which a general anesthetic agent sedates the central nervous system at the specific concentration at which it is delivered. The depth of anesthesia plays a crucial role in determining surgical complications, and it is imperative to keep it under control to perform a successful surgery. This study used electroencephalography (EEG) signals to predict depth levels of anesthesia. Traditionally, preprocessing methods such as signal decomposition, followed by model building using deep learning, have been used to classify anesthetic depth levels. This paper proposes a novel approach to classifying anesthesia levels based on time series feature extraction, by finding the relation between EEG signals and the Bispectral Index over a period of time. Time series feature extraction on the basis of scalable hypothesis tests was performed by analyzing the relation between the EEG signals and the Bispectral Index, and machine learning models such as the support vector classifier, XGBoost classifier, gradient boosting classifier, decision trees, and random forest classifier were used to train on the features and predict the depth level of anesthesia. The best-performing model was the random forest, which achieved an accuracy of 83%. This provides a platform for further research into time series-based feature extraction in this area.
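A sketch of the per-window feature-extraction step: libraries based on scalable hypothesis tests (such as tsfresh) compute hundreds of features per EEG window, of which only three simple ones are shown here, on a toy signal. The resulting feature vectors would then be paired with depth labels derived from the Bispectral Index and fed to a classifier such as a random forest; the window length and feature choices here are hypothetical:

```python
def window_features(window):
    """A few simple per-window EEG features: mean, variance, and the
    least-squares slope of the signal against sample index."""
    n = len(window)
    mean = sum(window) / n
    var = sum((v - mean) ** 2 for v in window) / n
    t_mean = (n - 1) / 2
    num = sum((i - t_mean) * (v - mean) for i, v in enumerate(window))
    den = sum((i - t_mean) ** 2 for i in range(n))
    slope = num / den
    return [mean, var, slope]

# Toy "EEG" window of five samples rising linearly.
signal = [0.0, 1.0, 2.0, 3.0, 4.0]
print(window_features(signal))  # [2.0, 2.0, 1.0]
```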
Authors: Antonio Sarasa-Cabezuelo
Violence involving firearms is a very important problem in the USA, and a large number of crimes of this type are recorded every year. However, the solutions proposed so far have not managed to reduce the number of such crimes. One of the cities with a large number of violent crimes is New York City, where the number of crimes is not homogeneous and depends on the district in which they occur. This paper proposes to study information about crimes involving firearms, with the aim of characterizing the factors on which the occurrence of this type of crime depends, such as levels of poverty and culture. Since the districts are not homogeneous, the information was analyzed at the district level, using data from the open data portal of the city of New York and machine-learning techniques. The results show that the relevant variables differ from district to district.
Authors: Silvia Brunoro Lisa Mensi
Access to basic healthcare for people who are not registered in the national health system is an urgent problem today, both in Italy and in the rest of the world. Immigration and poverty are only some of the factors that prevent healthcare, one of the primary rights of humanity, from being a right for everyone. The main problems, which have grown exponentially in the last decade, are at the operational level, owing to the lack of personnel (mostly volunteers) and the lack of space. This paper illustrates procedures and techniques for the design of a small emergency structure that can be moved and positioned in urban contexts. The first part consists of an in-depth analysis of the problem and of the state of the art of existing typologies. The second part is dedicated to the conceptual framework (requirements, conceptual model) and to the definition of a preliminary design for the new approach to basic non-conventional sanitary spaces. Finally, a virtual case study (project application) in Italy is presented.
Authors: Sheng Wu Dong-Sheng Jeng
Solute transport through porous media is usually described by well-established conventional transport models with the ability to account for advection, dispersion, and sorption. In this study, we extend our previous one-dimensional model for solute transport in an unsaturated porous medium to two dimensions. The present model is based on a small-strain approach and is validated against previous work. Both homogeneous landfill and pointed landfill conditions are considered, and a detailed parametric study shows the differences between the present model and the previous one-dimensional model.
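To make the conventional transport core concrete, here is a minimal explicit finite-difference sketch of a 2-D advection-dispersion step (without the sorption, unsaturated-flow, and small-strain consolidation terms of the paper's model). The grid size, dispersion coefficient, velocity, and time step are all hypothetical and chosen to satisfy the explicit stability limits.

```python
import numpy as np

# Explicit finite-difference step for the conventional 2-D
# advection-dispersion equation dC/dt = D*(Cxx + Cyy) - v*Cx,
# with periodic boundaries and a first-order upwind advection term.
nx = ny = 50
dx = dy = 1.0
D, v, dt = 0.5, 0.2, 0.4   # satisfies D*dt/dx^2 <= 1/4 and v*dt/dx <= 1

C = np.zeros((nx, ny))
C[10, 25] = 100.0          # point source (a "pointed landfill" analogue)

for _ in range(200):
    Cxx = (np.roll(C, -1, 0) - 2 * C + np.roll(C, 1, 0)) / dx**2
    Cyy = (np.roll(C, -1, 1) - 2 * C + np.roll(C, 1, 1)) / dy**2
    Cx = (C - np.roll(C, 1, 0)) / dx       # upwind difference for v > 0
    C = C + dt * (D * (Cxx + Cyy) - v * Cx)

print(round(C.sum(), 6))   # periodic boundaries conserve total mass: 100.0
```

The plume advects in the +x direction while spreading in both directions; a production model would add sorption and the consolidation coupling described in the abstract.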
Authors: Eli D. Ethridge Bahtiyar Efe Anthony R. Lupo
Many previous studies of the occurrence of blocking anticyclones, their characteristics, and their dynamics have defined the onset longitude using the one-dimensional zonal index type criterion proposed by Lejenas and Okland. In addition to examining the blocking event itself, the onset longitude was determined to the nearest five degrees of longitude using the National Centers for Environmental Prediction/National Center for Atmospheric Research reanalyses that were used to identify the events. In this study, each blocking event in the University of Missouri Blocking Archive was re-examined to identify an onset latitude, and this information was added to the archive. The events were then plotted and displayed on a map of the Northern or Southern Hemisphere using Geographic Information System (GIS) software housed at the University of Missouri, as dots colored and sized according to block intensity and duration, respectively. This allowed for a comparison of blocking events in the archive with studies that used a two-dimensional index. The common onset regions were then divided by phase of the El Niño/Southern Oscillation (ENSO), so that the typical onset of intense and persistent blocking events could be examined. The results showed favorable agreement between the onset regions identified here and those found in previous studies that used a two-dimensional blocking index. Additionally, variability was identified in the onset regions of blocking in both hemispheres by ENSO phase, including in the location of more intense and persistent events.
Authors: Anuradha Mathrani Jian Wang Ding Li Xuanzhen Zhang
This paper draws upon the United Nations 2022 data report on the achievement of Sustainable Development Goals (SDGs) across the following four dimensions: economic, social, environmental and institutional. Ward’s method was applied to obtain clustering results for forty-five Asian countries to understand their level of progress and overall trends in achieving SDGs. We identified varying degrees of correlation between the four dimensions. The results show that East Asian countries performed poorly in the economic dimension, while some countries in Southeast Asia and Central and West Asia performed relatively well. Regarding social and institutional dimensions, the results indicate that East and Central Asian countries performed relatively better than others. Finally, in the environmental dimension, West and South Asian countries showed better performance than other Asian countries. The insights gathered from this study can inform policymakers of these countries about their own country’s position in achieving SDGs in relation to other Asian countries, as they work towards establishing strategies for improving their sustainable development targets.
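The clustering step above can be sketched as follows. This is a minimal illustration of Ward's minimum-variance method with SciPy on synthetic data; the country-by-dimension score matrix here is hypothetical, standing in for the forty-five countries' scores on the four SDG dimensions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical SDG indicator matrix: 45 countries x 4 dimension scores
# (economic, social, environmental, institutional) on a 0-100 scale,
# drawn from three synthetic performance tiers.
rng = np.random.default_rng(1)
scores = np.vstack([
    rng.normal(70, 5, (15, 4)),   # high performers
    rng.normal(50, 5, (15, 4)),   # mid performers
    rng.normal(30, 5, (15, 4)),   # low performers
])

Z = linkage(scores, method="ward")           # Ward's minimum-variance method
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
print(sorted(set(labels.tolist())))  # [1, 2, 3]
```

Each country's label then indicates its performance tier; inspecting cluster means per dimension recovers the kind of regional contrasts reported in the abstract.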
Authors: Amar Shukla Rajeev Tiwari Shamik Tiwari
Alzheimer’s Disease (AD) is becoming increasingly prevalent across the globe, and various diagnostic and detection methods have been developed in recent years. Several techniques are available to pre-process medical scans, including automatic pipeline methods and machine learning methods that utilize biomarkers, fusion, and registration for multimodality. The use of automated pipelines and machine learning systems has proven beneficial in accurately identifying AD and its stages, with a success rate of over 95% for single- and binary-class classifications. However, challenges remain in multi-class classification, such as distinguishing between AD and mild cognitive impairment (MCI), as well as between sub-stages of MCI. The research also emphasizes the significance of using multi-modality approaches for effective validation in detecting AD and its stages.
Authors: Pantelis Nikolaidis
Exercise testing has important applications for sport, exercise and clinical settings, providing valuable information for exercise prescription and diagnostics for health purposes. Often, exercise testing includes the participant’s maximal effort, and the testing score partially depends on whether the maximal effort has been exerted. In this context, motivation in exercise testing, including verbal encouragement and video presentation, plays a vital role in assessing participants. Professionals involved in exercise testing, such as exercise physiologists and sport scientists, should be aware of motivation’s role in performance during laboratory or field testing, especially using verbal encouragement. Motivation during exercise testing should be standardized and fully described in testing protocols. In this way, exercise testing would provide valid and reliable results for exercise prescription or other purposes (e.g., sport talent identification, athletes’ selection, education, research and rehabilitation).
Authors: Siegfried Hackel Shanna Schönhals Lutz Doering Thomas Engel Reinhard Baumfalk
This article depicts the role of the Digital Calibration Certificate (DCC) for an end-to-end digital quality infrastructure and as the basis for developments that are designated by the keyword “Industry 4.0”. Furthermore, it describes the impact the DCC has on increasing productivity in the manufacturing of products and in global trade. The DCC project is international in its scope. Calibration certificates document the measurement capability of a measurement system. They do so independently, by providing traceability to measurement standards. Therefore, they not only play an important role in the world of metrology but also make it possible for manufacturing and commercial enterprises to exchange measurement values reliably and correctly at the national and at the international level. Thus, a DCC concept is urgently needed for the end-to-end digitalization of industry in the era of Industry 4.0 and for Medicine 4.0. A DCC brings about important advantages for issuers and for users. The DCC leads to the stringent, end-to-end, traceable and process-oriented organization of manufacturing and trading. Digitalization is thus a key factor in the field of calibration, as it enables significant improvements in product and process quality. The reason for this is that the transmission of errors will be prevented, and consequently, costs will be saved, as the time needed for distributing and disseminating the DCCs and the respective calibration objects will be reduced. Furthermore, it will no longer be necessary for the test equipment administration staff to update the data manually, which is a time-consuming, tedious and error-prone process.
Authors: Sandeep Pratap Singh Shamik Tiwari
Identity management addresses the problem of providing authorized owners with safe and simple access to information and services through specific identification processes. The shortcomings of unimodal systems have been addressed by the introduction of multimodal biometric systems, whose use has increased the overall recognition rate. A new degree of fusion, known as an intelligent Dual Multimodal Biometric Authentication Scheme, is established in this study. In the proposed work, two multimodal biometric systems are developed by combining three unimodal biometric systems: ECG, sclera, and fingerprint. The sequential-model biometric system is developed using decision-level fusion based on WOA-ANN, and the parallel-model biometric system is developed using score-level fusion based on SSA-DBN. Biometric authentication involves preprocessing, feature extraction, matching, and scoring for each unimodal system. For each biometric attribute, matching scores and individual accuracy are computed independently. Because the matchers on the three biometric traits produce varying values, a matcher-performance-based fusion procedure is demonstrated. The two-level fusion techniques (score and feature) are implemented separately, and their results are compared with the current scheme to identify the optimum model. The suggested scheme achieves the best TPR, FPR, and accuracy rates.
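As a hedged sketch of the score-level fusion idea, the snippet below normalizes each matcher's scores to a common range and combines them with performance-based weights. This is a generic weighted-sum fusion in NumPy, not the paper's SSA-DBN method; all scores and weights are hypothetical.

```python
import numpy as np

def min_max_norm(s):
    """Map raw matcher scores to [0, 1] so heterogeneous matchers are comparable."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(score_lists, weights):
    """Weighted-sum score-level fusion across matchers.

    Each matcher's scores are min-max normalized, then combined with
    weights reflecting matcher performance (e.g., individual accuracy).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize the weights
    normed = np.vstack([min_max_norm(s) for s in score_lists])
    return w @ normed                            # one fused score per probe

# Hypothetical scores from ECG, sclera, and fingerprint matchers for 5 probes;
# note the sclera matcher reports on a different native scale.
ecg  = [0.2, 0.9, 0.4, 0.7, 0.1]
scl  = [10, 80, 30, 60, 5]
fing = [0.5, 0.95, 0.3, 0.8, 0.2]
fused = fuse_scores([ecg, scl, fing], weights=[0.2, 0.3, 0.5])
print(int(np.argmax(fused)))  # probe 1 has the strongest combined evidence
```

A decision would then be made by thresholding the fused score, with the threshold chosen from the TPR/FPR trade-off on validation data.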
Authors: Hartmut Hirsch-Kreinsen
This contribution deals with the diffusion of Industry 4.0 technologies and their consequences for work. Additionally, design options for work in Industry 4.0 are discussed. The following are outlined: First, since there are as yet no concrete future prospects for digital work, different development perspectives can be envisioned. Second, the development of Industry 4.0, therefore, has to be regarded as a design project. One theoretical basis for this is the “sociotechnical systems” approach. Third, this approach enables criteria for the design and implementation of human-oriented forms of digitized work to be systematically developed. The empirical basis of this contribution derives from research findings on the implementation of Industry 4.0 technologies and the development of digitized work in German industry. The research results are based on qualitative research methods such as company case studies and expert interviews.
Authors: Shubashini Rathina Velu Vinayakumar Ravi Kayalvily Tabianan
The goal of this work is to enhance existing financial market forecasting frameworks by including an additional factor, in this case a collection of carefully chosen tweets, in a long short-term memory (LSTM) recurrent neural network. In order to produce attributes for such a forecast, this research used a unique sentiment analysis approach that combined psychological labelling with a valence rating representing the strength of the sentiment. Both lexicons produced extra properties such as 2-level polarization, 3-level polarization, gross reactivity, and total valence. The emotional polarity explicitly marked in the database compared well with the outcomes of the novel lexicon approach. Plotting the outcomes of each of these concepts against the actual market rates of the equities examined was the concluding step of this analysis. Root Mean Square Error (RMSE), precision, and Mean Absolute Percentage Error (MAPE) were used to evaluate the results. In most instances of market forecasting, adding the additional factor was shown to reduce the RMSE and increase the precision of forecasts over long sequences.
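The two error metrics used above are straightforward to state in code. The sketch below implements RMSE and MAPE in NumPy on hypothetical price and forecast series (the values are illustrative, not from the paper).

```python
import numpy as np

def rmse(actual, pred):
    """Root Mean Square Error: penalizes large deviations quadratically."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mape(actual, pred):
    """Mean Absolute Percentage Error, in percent; undefined if actual has zeros."""
    a, p = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs((a - p) / a)) * 100)

# Hypothetical closing prices vs. model forecasts
actual = [100.0, 102.0, 101.0, 105.0]
pred   = [101.0, 101.0, 103.0, 104.0]
print(round(rmse(actual, pred), 3), round(mape(actual, pred), 3))  # 1.323 1.228
```

RMSE is scale-dependent (here, in price units), while MAPE is relative, which is why both are commonly reported together for market forecasts.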
Authors: Ahmad Yaman Abdin Claus Jacob
During the global Corona pandemic, the validity of science has been challenged by sections of the public, often for political gains [...]
Authors: Philip Q. Yang Michaela LaNay Wilson
A global crisis generated by human-made climate change has added urgency to the need to fully understand human pro-environmental behaviors (PEBs) that may help slow down the crisis. Factors influencing personal and public PEBs may or may not be the same. Only a few studies have empirically investigated the determinants of personal and public PEBs simultaneously, and they contain major limitations and mixed results. This study develops a conceptual model for explaining both personal and public PEBs that incorporates demographic, socioeconomic, political, and attitudinal variables and their direct and indirect effects. Using the latest available data from the 2010 General Social Survey and structural equation modeling (SEM), we tested the determinants of both personal and public PEBs in the United States. The results reveal that environmental concerns, education, and political orientation demonstrate similar significant impacts on both personal and public PEBs, but income, gender, race, urban/rural residency, region, and party affiliation have differential effects on these behaviors. Age, cohort, and religion have no significant effect on either type of behavior. Our results confirm some existing findings; however, they challenge the findings of much of the literature.
Authors: Cédric Sueur
Connectomics, the network study of connectomes or maps of the nervous system of an organism, should be applied and expanded to human and animal societies, giving rise to the domain of socioconnectomics, the counterpart of neuroconnectomics. This new network study framework would open up new perspectives in evolutionary biology and add new elements to theories such as the social and cultural brain hypotheses. Answering questions about network topology, specialization, and their connections with functionality at one level (i.e., neural or societal) may help in understanding the evolutionary trajectories of these patterns at the other level. Expanding connectomics to societies should be done in comparison and combination with multilevel network studies and the possibility of multiorganization selection processes. The study of neuroconnectomes and socioconnectomes in animals, from simpler to more advanced ones, could lead to a better understanding of social network evolution and the feedback between social complexity and brain complexity.
Authors: Sci Editorial Office
High-quality academic publishing is built on rigorous peer review [...]
Authors: Punya Mainali Phadindra Wagle Chasen McPherson David N. McIlroy
A signature of synaptic potentiation conductance has been observed in an α-Fe2O3/p-Si device fabricated using spin coating. The conductance of the device under dark conditions and under illumination with a white light source was characterized as a function of the application of a periodic bias (voltage) with a triangular profile. The conductance of the device increases with the number of voltage cycles applied and plateaus at its maximum value of 0.70 μS under dark conditions and 12.00 μS under illumination, which mimics the analog synaptic weight change with the action potential of a neuron. In the range of applied voltage from 0 V to 0.7 V, the conduction mechanism corresponds to trap-assisted tunneling (TAT), and in the range of 0.7–5 V it corresponds to Poole–Frenkel emission (PFE). The conductance as a function of electrical pulses was fitted with a Hill function, which is a measure of cooperation in biological systems. In this case, it allows one to determine the turn-on threshold (K) of the device in terms of the number of voltage pulses, found to be 3 and 166 under dark and illuminated conditions, respectively. The gradual conductance change and activation after a certain number of pulses closely mimic the synaptic potentiation of neurons. In addition, the threshold parameter extracted from the Hill equation fit, acting as the number of pulses for synaptic activation, is found to be programmable via the intensity of the light illumination.
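The Hill-function fit described above can be sketched as follows. Synthetic conductance-vs-pulse data are generated at the paper's reported dark-condition scale (plateau ~0.70 μS, threshold K ~ 3 pulses; the Hill slope h = 2 and the noise level are hypothetical) and the threshold is recovered with SciPy's `curve_fit`.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(n, g_max, k, h):
    """Hill function: conductance vs. number of pulses n, with threshold K."""
    return g_max * n**h / (k**h + n**h)

# Synthetic data at the reported dark-condition scale; h and noise are assumed
n = np.arange(1, 31, dtype=float)
rng = np.random.default_rng(2)
g = hill(n, 0.70, 3.0, 2.0) + rng.normal(0, 0.005, n.size)

# Fit with positivity bounds so the power terms stay well-defined
popt, _ = curve_fit(hill, n, g, p0=[1.0, 1.0, 1.0], bounds=(0, np.inf))
print(round(float(popt[1]), 1))  # recovered threshold K, in pulses (~3)
```

The same fit applied to the illuminated-condition data would yield the larger threshold (166 pulses) reported in the abstract.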
Authors: Amr A. El-Hanafy Yasser M. Saad Saleh A. Alkarim Hussein A. Almehdar Fuad M. Alzahrani Mohammed A. Almatry Vladimir N. Uversky Elrashdy M. Redwan
With the increasing interest in the identification of differences between camel breeds over the last decade, this study was conducted to estimate the variability of milk production and composition of four Saudi camel breeds during different seasons. Milk records were taken two days per week from females of the Majahem, Safra, Wadha, and Hamra breeds distributed over Saudi Arabia. The milk yield during winter indicated that the weekly average of the Wadha breed was significantly lower (27.13 kg/week) than that of the Majahem and Hamra breeds. The Safra breed had the lowest milk yield (30.7 kg/week) during summer. During winter, the Hamra breed had a lower content of all analyzed milk components except proteins and was characterized by a lower pH than the milk of the other breeds. However, the Hamra breed had significantly higher contents of milk fat and lactose than the other breeds during summer, with values of 3.87% and 4.86%, respectively. Milk collected during winter from the Majahem, Safra, and Wadha breeds was characterized by a significant increase in all milk components and in milk pH. Finally, isoelectric focusing analysis revealed noticeable variability in casein purified from camel milk among the different Saudi breeds, with the highest value, 2.29 g per 100 mL, recorded for the Wadha breed.
Authors: Caio Wolf Klein Jéssica Kuntz Maykot Enedir Ghisi Liseane Padilha Thives
The objective of this study was to carry out a financial feasibility analysis of harvesting rainwater from permeable pavements in a city square. A case study was carried out in a square close to the beach in the city of Florianópolis, Brazil, and questionnaires were administered to pedestrians circulating in the area. The square is to be implemented to promote sustainability and improve users’ quality of life. From the rainfall data and the average daily water demand for irrigation of the square’s vegetation, the volume of rainwater to be harvested from the permeable pavement was calculated; the rainwater demand was estimated as 662 L/day. The implementation and operation costs of the pavement and irrigation systems were evaluated. The potential for potable water savings was 89.8%, and the payback period was estimated as 347 months. This study showed that rainwater collected from permeable pavements is financially feasible and represents a promising technique.
Authors: Demetris Koutsoyiannis
Whilst several methods exist to provide sample estimates of the probability distribution function at several points, for the probability density of continuous stochastic variables, only a gross representation through the histogram is typically used. It is shown that the newly introduced concept of knowable moments (K-moments) can provide smooth empirical representations of the distribution function, which in turn can yield point and interval estimates of the density function at a large number of points or even at any arbitrary point within the range of the available observations. The proposed framework is simple to apply and is illustrated with several applications for a variety of distribution functions.
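A minimal sketch of the non-central K-moment estimator, after Koutsoyiannis' definition of knowable moments as expected maxima; the order-statistics weighting below is a standard unbiased construction, but the notation and any further steps toward density estimation may differ from the paper.

```python
import numpy as np
from math import comb

def k_moment(sample, p):
    """Unbiased estimate of the non-central K-moment K'_p = E[max(x_1..x_p)].

    Uses order-statistics weights C(i-1, p-1)/C(n, p): the probability that
    the i-th smallest of n sample values is the maximum of a random
    p-element subsample.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    w = np.array([comb(i - 1, p - 1) / comb(n, p) for i in range(1, n + 1)])
    return float(w @ x)

rng = np.random.default_rng(3)
sample = rng.exponential(scale=1.0, size=2000)

# For p = 1 the estimator reduces exactly to the sample mean
print(abs(k_moment(sample, 1) - sample.mean()) < 1e-9)  # True
# Increasing p probes further into the distribution tail
print(k_moment(sample, 10) > k_moment(sample, 2))       # True
```

Unlike histogram bins, these estimates vary smoothly with the order p, which is what permits the smooth empirical representations of the distribution described in the abstract.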
Authors: Christoph-Alexander Holst Volker Lohweg
Technical systems generate an increasing amount of data as integrated sensors become more available. Even so, data are still often scarce because of technical limitations of sensors, an expensive labelling process, or rare concepts, such as machine faults, which are hard to capture. Data scarcity leads to incomplete information about a concept of interest. This contribution details causes and effects of scarce data in technical systems. To this end, a typology is introduced which defines different types of incompleteness. Based on this, machine learning and information fusion methods are presented and discussed that are specifically designed to deal with scarce data. The paper closes with a motivation and a call for further research efforts into a combination of machine learning and information fusion.
Authors: Juan A. Conesa Eugenio Tomás
In this work, briquettes from mattress waste are manufactured and the acoustic properties of the materials produced are checked. Briquettes are made at temperatures between 170 and 185 °C using waste from viscoelastic memory foam (VMF) and applying pressures between 25 and 75 MPa. Later, the properties of the materials, such as their bulk density, porosity, and compaction factor, are measured. Afterwards, the materials are subjected to a test to determine the sound reduction index at different frequencies. This is accomplished with a home-made system in which the acoustic signal is compared in the presence and absence of the mattress briquettes, using MATLAB® software (MathWorks, Natick, MA, USA) for signal processing. The results are also compared with a reference acoustic insulation material. The tests show that the materials produced from mattress waste are able to reduce the intensity of sound in a similar way to commercial materials. In fact, the reduction indices of the prepared briquettes are much higher at the frequencies that most affect the human ear, compared to a reference insulating material.
Authors: Kai Lucks
This paper addresses the management of digital–informational transformation of industrial enterprises. Any transformation requires the coordinated action of several independent actors. Similarly, the digital–informational transformation required for the fourth industrial revolution (i.e., Industry 4.0) requires the involvement of multiple actors from the public and private sectors. This applies to an individual company as well as to the entire sector, regardless of the desired level of transformation. The increasing dissolution of boundaries between industrial and non-industrial actors is therefore essential for Industry 4.0. This paper addresses the above dissolution activities, focusing on cross-company networks and management issues. The management aspects of the following factors are examined: culture change, strategies, degree of digitalization, degree of networking, Internet of Things, digital ecosystems, human resources, organizational development, hierarchies, cross-functional collaboration, cost drivers, innovation pressures, supply chains, enterprise resource planning systems and corporate acquisitions/mergers. Based on the findings on the above factors, a management-driven model of the “transformation to Industry 4.0” for manufacturing companies is presented and discussed. This work thus complements the existing literature on Industry 4.0, as the majority of the literature on Industry 4.0 deals with technical problem solving at the field level.
Authors: Anirudh Apparaju Ognjen Arandjelović
Artificial neural networks, in their various forms, convincingly dominate present-day machine learning. Nevertheless, the manner in which these networks are trained, in particular using end-to-end backpropagation, presents a major limitation in practice, hampers research, and raises questions about the very fundamentals of learning algorithm design. Motivated by these challenges, and by the contrast between the learning processes underlying biological (natural) neural networks and those of the artificial networks they inspire, there has been an increasing amount of research on the design of biologically plausible means of training artificial neural networks. In this paper we (i) describe a biologically plausible learning method that takes advantage of various biological processes, such as Hebbian synaptic plasticity, and includes both supervised and unsupervised elements, (ii) conduct a series of experiments aimed at elucidating the advantages and disadvantages of this biologically plausible learning as compared with end-to-end backpropagation, and (iii) discuss the findings, which should serve to illuminate the algorithmic fundamentals of interest and direct future research. Among our findings is the greater resilience of biologically plausible learning to data scarcity, which conforms to our expectations, but also its lesser robustness to additive, zero-mean Gaussian noise.
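To illustrate the local, error-free flavor of Hebbian plasticity mentioned above, here is a minimal NumPy sketch of Oja's stabilized Hebbian rule (a classical example, not the paper's exact method): the weight update uses only pre- and post-synaptic activity, with no backpropagated error signal, and drives the weight vector toward the leading principal component of the inputs with unit norm. The input distribution and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Correlated 2-D inputs whose principal axis is roughly (1, 1)/sqrt(2)
base = rng.standard_normal(5000)
X = np.stack([base + 0.1 * rng.standard_normal(5000),
              base + 0.1 * rng.standard_normal(5000)], axis=1)

w = rng.standard_normal(2)
eta = 0.01
for x in X:
    y = w @ x                    # post-synaptic activity
    w += eta * y * (x - y * w)   # Hebbian term y*x, with Oja's decay y^2*w

print(round(float(np.linalg.norm(w)), 1))  # 1.0: converges to unit norm
```

Because every update is local in both space and time, rules of this family sidestep the weight-transport and update-locking objections raised against end-to-end backpropagation.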
Authors: David Michael O’Sullivan Sukbum Kim Jeheon Moon Sungmin Kim
Physical activity is a crucial factor for maintaining physical health, and a vast amount of research has also shown its link with better mental health. Supporting the safe use of gyms is vital in the new normal of living with COVID-19. Therefore, in this study we present research supporting the development of a framework for a Total Safe-Care Fitness Solution based on a multimodal COVID-19 tracking system integrating computer vision and data from wearable sensors. We propose a framework with three areas that need to be integrated: a COVID-19 vaccine and health status recognition system (a QR code scanned prior to entry to the gym, physiological signals monitored by a smart band, and a health questionnaire filled in prior to entry); an accident detection system (video- and smart-band-based); and a gym-user digital tracking system (CCTV- and smart-band-based). We show the proposed architecture for the integration of these systems and provide practical tips on how to implement it in testbeds for feasibility testing. To the best of our knowledge, this is the first proposed COVID-19 tracking system for use in gyms that includes a predictive model for accident detection, enabling safer exercise participation through health monitoring.
Authors: Alec Feinberg
The goal of this paper is to provide an initial assessment of water-vapor feedback (WVF) in humid urban heat island (UHI) environments based on temperature difference data. To achieve this, a novel temperature-difference WVF model was developed that can analyze global and local UHI temperature difference data. Specifically, the model was applied to a comparative temperature literature study of similar cities located in humid versus dry climates, which found that the daytime UHI ΔT was 3.3 K higher in humid than in dry climates when averaged over thirty-nine cities. Since the direct measurement of WVF in UHI areas could prove challenging due to variations in temperature lapse rates from tall buildings, modeling provides an opportunity to make a preliminary assessment where measurements may be difficult. Thus, the results provide the first available UHI ΔT WVF model assessment. The preliminary results find local water-vapor feedback values for wet-biased cities of 3.1 Wm−2K−1, 3.4 Wm−2K−1, and 4 Wm−2K−1 for UHI average temperatures of 5 °C, 15 °C, and 30 °C, respectively. The temperature difference model could also be used to reproduce literature values, a capability that helps to validate the model and its findings. Heatwave assessments are also discussed, as they are strongly affected by UHI water-vapor feedback and support the observation that humid regions amplify heat more than UHIs in dry regions, exacerbating heatwave problems. Furthermore, recent studies have found that urbanization contributes to global warming more than previously anticipated; cities in humid environments are therefore likely larger contributors to such warming trends than cities in dry environments. These preliminary modeling results indicate a strong local UHI water-vapor feedback concern for cities in humid environments, with values possibly more than a factor of two higher than the global average.
This assessment also indicates that albedo management would likely be an effective way to reduce the resulting WVF temperature increase.
Authors: Ayla Roberta Borges Serra Thiago Rui Casagrande Juliana Fonseca de Lima Marcelo Firmino de Oliveira Severino Alves Júnior Marcos de Oliveira Junior Osvaldo Antonio Serra
Hydrogels based on mixed zirconium/europium ions and benzene tricarboxylic acid were synthesized by hydrothermal reaction. A solid glass-like material is formed upon drying, showing strong reddish luminescence. The system was characterized by solid-state nuclear magnetic resonance, thermal analyses, and infrared spectroscopy. The results reveal the amorphous character of the structure and the presence of at least four types of binding modes between the metal oxide clusters and benzene tricarboxylic acid. In addition, thermogravimetric analysis (TGA) showed high thermal stability, with the material decomposing at temperatures higher than 500 °C. The combination of intense Eu3+ luminescence with large thermal stability makes this material a strong candidate for application as a luminescent red marker for gunshot residue (GSR). As proof of concept, we show the feasibility of this application by performing shooting tests using our compound as a GSR marker. After the shots, the residual luminescent particles could be visualized in the triggered cartridge, inside the muzzle of the firearm, and in smaller amounts on the hands of the shooter, using a UV lamp (λ = 254 nm). Remarkably, our results also show that the Eu3+ emission of the GSR is very similar to that observed for the original solid material. These characteristics are of great importance since they provide a chance to use smaller amounts of the marker in the ammunition, lowering the costs of potential industrial manufacturing processes.
Authors: Dioneia Monte-Serrat Carlo Cattani
The semantic web invests in systems that work collaboratively. In this article we show that the collaborative way is not enough, because the system must ‘understand’ the data resources provided to it in order to organize them in the direction indicated by the system’s core, the algorithm. For intelligent systems to imitate human cognition, in addition to the technical skills to model algorithms, we show that the specialist needs a good knowledge of the principles that explain how human language constructs concepts. The content of this article focuses on the principles of the conceptual formation of language, pointing to aspects related to the environment, to logical reasoning, and to the recursive process. We used the strategy of superimposing the dynamics of human cognition and intelligent systems to open new frontiers regarding the formation of concepts by human cognition. The dynamic aspect of the recursion of the human linguistic process integrates visual, auditory, and tactile input stimuli, among others, into the central nervous system, where meaning is constructed. We conclude that the human linguistic process involves axiomatic (contextual/biological) and logical principles, and that the dynamics of the relationship between them takes place through recursive structures, which guarantee the construction of meanings through long-range correlation under scale invariance. Recursion and cognition are, therefore, interdependent elements of the linguistic process, making it a set of sui generis structures which evidence that the essence of language, whether natural or artificial, is a form and not a substance.
Authors: Asif Raza Raghuram Kandimalla Sanjeeb Kalita Siddhartha Sankar Ghosh
Artesunate (ART), a plant-based semi-synthetic antimalarial drug, is emerging as a new class of effective cancer chemotherapeutics. However, the dosage of ART required to have an anti-cancer effect on cancer cells is greater than that needed to kill malaria parasites. The goal of this study was to develop an effective combination therapy to reduce the dose-dependent side effects of ART both in vitro and in vivo. In our study, 4-phenylbutyrate (4-PB), a histone deacetylase (HDAC) inhibitor, exhibited significant synergistic induction of apoptosis in MCF-7 cells in combination with ART. The IC50 of ART decreased significantly from 55.56 ± 5.21 µM to 24.71 ± 3.44 µM in MCF-7 cells. ART treatment increased cellular oxidative stress, and the resulting generation of intracellular reactive oxygen species (ROS) caused extensive DNA damage in the cell. The extent of ROS production and cell cycle arrest were further enhanced by 4-PB treatment. On further investigation, we found that 4-PB attenuated the mRNA expression of crucial DNA damage response (DDR) elements of the nonhomologous end-joining (NHEJ) pathway, consequently enhancing the DNA-damaging effect of ART. Furthermore, the combination therapy improved the life expectancy of the treated mice and produced a prominent reduction in tumour volume without interfering with the normal biochemical, haematological and histological parameters of the mice. Overall, our study revealed a novel combination therapy in which 4-PB synergistically potentiated the cytotoxicity of ART, providing a promising combination drug for effective cancer therapy.
]]>Authors: Najah F. Ghalyan Asok Ray William Kenneth Jenkins
Functional analysis is a well-developed field in the discipline of mathematics, which provides unifying frameworks for solving many problems in applied sciences and engineering. In particular, several important topics in signal processing (e.g., spectrum estimation, linear prediction, and wavelet analysis) were initiated and developed through collaborative efforts of engineers and mathematicians, who used results from Hilbert spaces, Hardy spaces, weak topology, and other areas of functional analysis to establish essential analytical structures for many subfields of signal processing. This paper presents a concise tutorial for understanding the theoretical concepts of the essential elements of functional analysis, which form a mathematical framework and backbone for central topics in signal processing, specifically statistical and adaptive signal processing. Applying these concepts to formulate and analyze signal processing problems may often be difficult for researchers in applied sciences and engineering who are not adequately familiar with the terminology and concepts of functional analysis. Moreover, these concepts are seldom explained in sufficient detail in the signal processing literature; on the other hand, they are well studied in textbooks on functional analysis, yet without emphasizing the perspectives of signal processing applications. Therefore, assimilating the ensemble of pertinent information on functional analysis and explaining its relevance to signal processing applications is of significant importance and utility to the professional communities of applied sciences and engineering. The information presented in this paper is intended to provide an adequate mathematical background, with a unifying concept, for apparently diverse topics in signal processing.
The main objectives of this paper from the above perspectives are summarized below: (1) Assimilation of the essential information from different sources of functional analysis literature, which are relevant to developing the theory and applications of signal processing. (2) Description of the underlying concepts in a way that is accessible to non-specialists in functional analysis (e.g., those with bachelor-level or first-year graduate-level training in signal processing and mathematics). (3) Signal-processing-based interpretation of functional-analytic concepts and their concise presentation in a tutorial format.
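As a minimal illustration, not drawn from the paper itself, of how a Hilbert-space result anchors a core signal processing tool, linear prediction can be stated via the projection theorem: the best linear predictor is the orthogonal projection of the current sample onto the span of its past, and the orthogonality condition yields the normal (Yule–Walker) equations.

```latex
% Hilbert space H of zero-mean, finite-variance random variables,
% with inner product <X,Y> = E[XY].
% Best linear predictor of X_n from the past p samples:
\hat{X}_n = \sum_{j=1}^{p} a_j X_{n-j},
\qquad
X_n - \hat{X}_n \perp \operatorname{span}\{X_{n-1},\dots,X_{n-p}\}.
% Orthogonality principle (projection theorem):
\mathbb{E}\!\left[(X_n - \hat{X}_n)\,X_{n-k}\right] = 0,
\qquad k = 1,\dots,p.
% For a wide-sense stationary process with autocovariance
% r(k) = E[X_n X_{n-k}], this gives the Yule--Walker equations:
\sum_{j=1}^{p} a_j\, r(k-j) = r(k), \qquad k = 1,\dots,p.
```

The point of the sketch is that existence and uniqueness of the predictor follow directly from the projection theorem in a Hilbert space, before any matrix computation is involved.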
]]>Authors: Christian Koldewey Daniela Hobscheidt Christoph Pierenkemper Arno Kühn Roman Dumitrescu
Industry 4.0 is currently one of the most influential trends in manufacturing. Termed the fourth industrial revolution, it promises to overthrow entrenched structures, opening new pathways for innovation and value creation. Like all revolutions, it is accompanied by disruption and uncertainty. Consequently, many manufacturing companies struggle to adopt an Industry 4.0 perspective that benefits their performance. Hence, our goal was to develop a method for increasing firm performance through Industry 4.0. A key decision was to treat the entire company as a socio-technical system, in order to depict the numerous interactions between people, technology, and business/organization. To realize the method, we combined consortium research, design science, and method engineering. We gathered comprehensive data from workshops, interviews, and five case studies, which we used to develop the method. It consists of four phases: a maturity model to determine the status quo, a procedure to derive a target position, a pattern-based approach to design the socio-technical system, and a procedure to define a transformation setup. Our approach is the first to combine maturity models with foresight and extensive prescriptive knowledge. For practitioners, the method provides orientation for the future-oriented planning of their transformation processes.
]]>Authors: Gerhard Litscher
Helmet designs have been used successfully for decades not only in integrative medicine for acupuncture research, but increasingly also in the field of transcranial photobiomodulation (TPBM), primarily for so-called mental diseases. The author of this article has worked on purpose-built helmet constructions for neuromonitoring for over 25 years and not only gives an overview of the development of these methods but also presents new methods and perspectives. The future of this branch of research certainly lies in the development of sensor-controlled therapy helmets for TPBM.
]]>Authors: Tosin Adewumi Foteini Liwicki Marcus Liwicki
In this study, we demonstrate that an open-domain conversational system trained on idioms or figurative language generates more fitting responses to prompts containing idioms. Idioms are a part of everyday speech in many languages and across many cultures, but they pose a great challenge for many natural language processing (NLP) systems that involve tasks such as information retrieval (IR), machine translation (MT), and conversational artificial intelligence (AI). We utilized the Potential Idiomatic Expression (PIE)-English idiom corpus for the two tasks that we investigated: classification and conversation generation. We achieved a state-of-the-art (SoTA) result of a 98% macro F1 score on the classification task by using the SoTA T5 model. We experimented with three instances of the SoTA dialogue model, the Dialogue Generative Pre-trained Transformer (DialoGPT), for conversation generation. Their performances were evaluated by using the automatic metric of perplexity and a human evaluation. The results showed that the model trained on the idiom corpus generated more fitting responses to prompts containing idioms 71.9% of the time, in comparison with a similar model that was not trained on the idiom corpus. We have contributed the model checkpoint/demo/code to the HuggingFace hub for public access.
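The two evaluation quantities named above can be made concrete in a short, self-contained sketch; the toy labels and token probabilities below are illustrative only, not the paper's corpus, models, or results. Macro F1 averages per-class F1 scores with equal class weight, and perplexity is the exponential of the average negative log-likelihood per token.

```python
import math

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

# Toy labels (class 1 = idiomatic, class 0 = literal) -- hypothetical data.
print(round(macro_f1([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]), 3))  # 0.8
# A uniform 4-way token distribution has perplexity exactly 4.
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 3))  # 4.0
```

Lower perplexity means the model assigns higher probability to the observed tokens; macro averaging prevents a frequent class from dominating the score.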
]]>Authors: Josef Vogt Katrin Rosenthal
The common method for producing casting molds for the fabrication of polydimethylsiloxane (PDMS) chips is standard photolithography. This technique offers high resolution, from hundreds of nanometers to a few micrometers. However, this mold fabrication method is costly, time-consuming, and might require clean room facilities. Additionally, non-micromechanics experts, who do not have specialized equipment, need a way to prototype chips easily and quickly themselves. Simple, so-called makerspace technologies are increasingly being explored as alternatives with the potential to enable anyone to fabricate microfluidic structures. We therefore tested simple fabrication methods for a PDMS-based microfluidic device. On the one hand, channels were replicated from capillaries and tape. On the other hand, different mold fabrication methods, namely laser cutting, fused layer 3D printing, stereolithographic 3D printing, and computer numerical control (CNC) milling, were validated in terms of machine accuracy and tightness. Most of these methods are already known, but the incorporation and retention of particles with sizes in the micrometer range have been less investigated. We therefore tested two different types of particles, which are common carriers for the immobilization of enzymes, so that the resulting reactor could ultimately be used as a microfluidic bioreactor. CNC milling proved to be the most reliable casting mold fabrication method. After some optimization steps with regard to manufacturing settings and post-processing polishing, the chips were tested for the retention of two different particle types (spherical and non-spherical particles). In this way, we successfully tested the obtained PDMS-based microfluidic chips for their potential applicability as (bio)reactors with enzyme immobilization carrier beads.
]]>Authors: Cláudia Campos Pessoa Fernando C. Lidon Diana Daccak Inês Carmo Luís Ana Coelho Marques Ana Rita F. Coelho Paulo Legoinha José Cochicho Ramalho António E. Leitão Mauro Guerra Roberta G. Leitão Paula Scotti Campos Isabel P. Pais Maria Manuela Silva Fernando H. Reboredo Maria Fernanda Pessoa Manuela Simões
Following an agronomic approach for the Ca enrichment of Rocha pears, this study aimed to assess the interactions between mineral nutrients in fruit tissues at harvest and after storage for 5 months, and to characterize the implications for the profile of sugars and fatty acids (FA). A total of seven foliar sprays (with concentrations of 0.1–0.6 kg·ha−1 Ca(NO3)2 and 0.8–8 kg·ha−1 CaCl2) were applied to pear trees. After harvest, the fruits were stored for 5 months in environmentally controlled chambers, the mineral contents in five regions (on the equatorial section) of the fruits were assessed, and the sugar and FA contents were quantified. For both dates, all foliar-sprayed treatments, to different extents, increased the Ca content in the center and near the epidermis of Rocha pear fruits, and the levels of K, Mn, Fe, Zn and Cu also varied. At harvest, the Ca treatments did not affect the levels of sucrose, glucose, fructose and sorbitol and, after storage, their concentrations remained higher in Ca-treated fruits. Additionally, the relative proportions of FA at harvest followed the order C18:2 > C18:1 > C16:0 > C18:3 > C18:0 > chains shorter than 16 C (<16:0), but after storage the order was C18:2 > C16:0 > C18:3 > C18:0 > C18:1 > chains shorter than 16 C (<16:0). It is concluded that the heterogeneous distribution of Ca in the tissues of Rocha pear fruits results from its absorption in the peel after Ca(NO3)2 and CaCl2 sprays and from the xylemic flux in the core prior to maturity. Additionally, the hydrolysis of complex polysaccharides affects the contents of simpler sugars during maturation, ripening and senescence, while storage decreases the amount of total fatty acids (TFA); however, the double bond index (DBI) indicates that cell membrane fluidity remains unaffected.
]]>Authors: Abdelkarim Alhloul Eva Kiss
The latest technological development, called Industry 4.0, has, like the previous industrial revolutions, brought a new challenge for people as a labor force, because new technologies require new skills and competencies. By 2030, the existing generation in the labor market will face a skill gap that threatens the replacement of humans by machines. Based on bibliometric analysis and a systematic literature review, the main aims of this study are, on the one hand, to reveal the most relevant articles concerning skills, competencies, and Industry 4.0, and on the other hand, to identify the new set of skills and competencies that are essential for the future labor force. Determining a model of new skills and competencies in connection with Industry 4.0 technologies is the main novelty of the study. A survey carried out among the workers of mostly multinational organisations in Hungary has also been used to explore the level of awareness of those skills and Industry 4.0 related technologies, and this can be considered the significance of the empirical research.
]]>Authors: Emmanuel Obeng-Gyasi
This commentary is an investigation of sources of lead (Pb) exposure in West Africa. Pb is generally acknowledged as one of the most widespread environmental health hazards in the region, and there is heightened concern over adverse health effects at various levels of exposure, including doses once considered safe. A literature review of the possible implications of Pb exposure for human health showed nervous system dysfunction, anemia, and potential cognitive diseases as the major health issues among children, while adults were found to suffer more from cardiovascular dysfunction, neurological decline, and reproductive diseases. Despite a decline in blood lead levels (BLLs), lead exposure continues to be a major public health concern, as no level of Pb exposure can be considered safe. Moreover, lowering BLLs entails identifying various lead sources, such as gasoline emissions, leaded paint, canned foods, and beverages, as well as plausible biological pathways of lead exposure and response. However, only countries such as Nigeria and Ghana have extensive research available regarding the different sources of Pb exposure. Further, it is not apparent which country is affected the most by Pb exposure. Therefore, this commentary aimed to explore different literature sources to describe and list the different sources of Pb exposure in 15 West African countries. The findings indicated water, food, and occupational exposure as the major sources of Pb exposure in the region. People in occupations such as e-waste and lead-acid battery recycling, auto mechanics, fuel attendance, welding, electronics repair, farming/spraying, and mining were found to be at immediate risk. Tobacco, spices and paints constituted additional potential sources of exposure. For residents living near landfills or in urban areas, the major sources of Pb exposure were soil, air, and dust particles.
The review revealed a vast research gap on the sources and implications of Pb exposure. Exposure to Pb could further increase due to uncontrolled traffic, urban growth, inadequate urban planning, and the inadequate enforcement of regulations. Therefore, more extensive research on the changing trends of Pb exposure among West African populations is needed.
]]>Authors: David Quiñonero Antonio Frontera
We report high-level ab initio calculations (CCSD(T)(full)/CBS//SCS-RI-MP2(full)/aug-cc-pwCVTZ) that demonstrate the importance of cooperativity effects when anion–π and CH/π interactions are simultaneously established with benzene as the π-system. In fact, most of the complexes exhibit high cooperativity energies that range from 17% to 25.3% of the total interaction energy, which is indicative of the strong influence of the CH/π interaction on the anion–π interaction and vice versa. Moreover, the symmetry-adapted perturbation theory (SAPT) partition scheme was used to study the different energy contributions to the interaction energies and to investigate the physical nature of the interplay between both interactions. Furthermore, the Atoms in Molecules (AIM) theory and the Non-Covalent Interaction (NCI) approach were used to analyze the two interactions further. Finally, a few examples from the Protein Data Bank (PDB) are shown. All results stress that the concurrent formation of both interactions may play an important role in biological systems due to the ubiquity of CH bonds, phenyl rings, and anions in biomolecules.
]]>Authors: Johannes Winter Anna Frey Jan Biehler
Considering the first ten years of Industrie 4.0 in Germany—the digital transformation of industry towards the goal of increased manufacturing productivity and mass customization—significant progress has been achieved. However, future efforts are required. This review first evaluates the status quo of implementation and research in Germany and finds that large-scale companies have proceeded faster than small- and middle-sized enterprises. Currently, regardless of their size, companies have in common a shortage of qualified specialists, coupled with a lack of adequate base technologies for Industrie 4.0 and an insufficient digital mindset. The creation of platform-based digital business models is particularly lagging behind, despite high research interest. This review subsequently identifies three research-driven fields of action that are particularly important for the future of Industrie 4.0: (1) resilience of value networks in the strategic area of sovereignty, (2) Open-Source as a driver for the strategic area of interoperability, and (3) the strategic combination of digitalization and sustainability as a basis for sustainable business models in the strategic area of sustainability.
]]>Authors: Akhil Mahajan Harbinder Singh Amandeep Singh Devendra K. Agrawal Amandeep Arora Tejpal Singh Chundawat
A series of new trifluoromethyl-substituted quinolone and hydantoin hybrids has been synthesized and evaluated against a Gram-positive bacterium (Staphylococcus aureus MTCC 96) and Gram-negative bacteria (Pseudomonas aeruginosa MTCC 441, Klebsiella pneumoniae MTCC 109, and Escherichia coli MTCC 442). Compound 19c, having the 6-propene group on the quinolone ring, showed similar activity to a standard drug (chloramphenicol) by exhibiting MIC values of 50 µg/mL against S. aureus and P. aeruginosa. The physicochemical properties of compound 19c were also determined and were in line with Lipinski’s rule of five, suggesting the suitability of compound 19c in biological systems. Various binding interactions of 19c within the active site of the DNA gyrase of S. aureus were also delineated by molecular docking studies, suggesting its capability to block the catalytic process of the DNA gyrase, which could be the reason for its antibacterial potential.
]]>Authors: Tobias Harland Christian Hocken Tobias Schröer Volker Stich
Data-driven transparency in end-to-end operations in real time is seen as a key benefit of the fourth industrial revolution. In the context of a factory, it enables fast and precise diagnoses and corrections of deviations and, thus, contributes to the idea of an agile enterprise. Since a factory is a complex socio-technical system, multiple technical, organizational and cultural capabilities need to be established and aligned. In recent studies, the underlying broad accessibility of data and corresponding analytics tools is called “data democratization”. In this study, we examine the status quo of the relevant capabilities for data democratization in the manufacturing industry and outline the way forward. The insights are based on 259 studies on the digital maturity of factories from multiple industries and regions of the world, using the acatech Industrie 4.0 Maturity Index as a framework; for this work, a subset of the data was selected. As a result, the examined factories show a lack of capabilities across all dimensions of the framework (IT systems, resources, organizational structure, culture). Thus, we conclude that the outlined implementation approach needs to comprise the technical backbone for a data pipeline as well as capability building and an organizational transformation.
]]>Authors: Kuan-Lun Lee Andrea Roesinger Uwe Hommel
Industrie 4.0 has stirred turbulence in China since its birth in 2011. The struggles of Chinese manufacturing enterprises to realize and adapt Industrie 4.0 in their production processes have yielded many new insights. These insights and findings can in turn serve as inputs for academics and policy makers to structure or fine-tune the development of the next generation of Industrie 4.0. The authors of this paper summarize the knowledge and understanding gained from their personal engagement in assisting Chinese manufacturing enterprises with the digitalization of their production processes. A real-life example shows how a typical Chinese mid-size manufacturing enterprise ended up with new business models after starting its digitalization journey with the simple goal of increasing efficiency. We conclude that the Chinese market will continue to be relevant for the future development of Industrie 4.0.
]]>Authors: Tao Yang Lijuan He Hailin Wang Chengjie Gao Hongling Yang
Fugitive dust particles are important contributors to urban ambient particulate matter (PM), yet their emissions have been ignored or greatly underestimated in previous studies, leading to the underestimation of PM concentrations and health impacts. Thus, studying the morphology of fugitive dust, taking appropriate dust-suppression measures, and evaluating dust-suppression effects are crucial to the prevention and control of fugitive dust. In this study, we investigated the morphology and composition of dust particles from different dust sources, including bare land, stock dumps, construction sites, and road dust. Afterwards, different dust-suppression measures, including fence interception nets, bare-ground mesh nets, and road dust suppressants, were undertaken to simulate and analyze their dust-suppression effects. Finally, the height concentration profiling method was used to comprehensively evaluate the on-site dust-suppression effect; this method can not only accurately evaluate the dust-suppression effect but also predict the dust-suppression ability over a wide range. Gaining insight into the morphology and composition of dust from representative sources is an important step toward preventing and controlling fugitive dust, and selecting an appropriate method for evaluating dust-suppression effects will provide a beneficial guide for effectively controlling PM pollution in the future.
]]>Authors: Henning Kagermann Wolfgang Wahlster
A decade after its introduction, Industrie 4.0 has been established globally as the dominant paradigm for the digital transformation of the manufacturing industry. Amalgamating research-based results and practical experience from the German industry, this contribution reviews the progress made in implementing Industrie 4.0 and identifies future fields of action from a technological and application-oriented perspective. Putting the human in the center, Industrie 4.0 is the basis for data-based value creation, innovative business models, and agile forms of organization. Today, in the German manufacturing industry, the Internet of Things and cyber–physical production systems are a reality in newly built factories, and the connectivity of machinery has been significantly increased in existing factories. Now, the trends of industrial AI, edge computing up to the edge cloud, 5G in the factory, team robotics, autonomous intralogistics systems, and trustworthy data infrastructures must be leveraged to strengthen resilience, sovereignty, semantic interoperability, and sustainability. This enables the creation of digital innovation ecosystems that ensure long-term adaptability in a volatile economic and geopolitical environment. In sum, this review represents a comprehensive assessment of the status quo and identifies what is needed in the future to reap the rewards of the groundwork done in the first ten years of Industrie 4.0.
]]>Authors: Xiaodan Zhang Jun Chang Xiaohua Yao Jingru Wang Jiatian Zhang Yang Yang Shuiping Yang Kailiang Wang Huadong Ren
As a woody oil crop, pecan [Carya illinoinensis (Wangenh.) K. Koch] may be a solution to the shortage of edible oil in the future. In this study, the fruit traits, kernel nutrition and fatty acid composition of 10 pecan varieties were determined to assess the potential of pecans for exploitation as edible oil, to further screen varieties that could be used as edible oil resources, and to understand their development prospects for cultivation in mountainous hills. The study showed that all the fruit trait indicators measured, including green-fruit weight (mean 28.47 g), nut weight (10.33 g), kernel weight (5.25 g), nut percentage (36.83%) and kernel percentage (50.50%), showed highly significant differences among the 10 varieties. Among the main nutritional indicators of the kernels, the crude fat content was stable (mean 70.01%) with non-significant differences, while protein (67.50 mg·g−1), soluble sugar (10.7 mg·g−1) and tannin (6.07 mg·g−1) showed highly significant differences between varieties. The oil percentage of nuts (kernel percentage × crude fat content) averaged 35.36%, with highly significant differences between varieties. The fatty acid composition was dominated by unsaturated fatty acids (mean 91.82%), with unsaturated fatty acids being 11.24 times more abundant than saturated fatty acids. Among the monounsaturated fatty acids, oleic acid was the highest (mean 70.02%), with highly significant differences between varieties, followed by cis-11-eicosenoic acid (0.25%), with non-significant differences between varieties; among the polyunsaturated fatty acids, linoleic acid was the highest (19.58%), followed by linolenic acid (0.97%), both of which showed highly significant differences between varieties; monounsaturated fatty acids were 2.42 times more abundant than polyunsaturated fatty acids.
Compared to other oilseed crops, pecan has the potential to produce “nutritious, healthy and stable” edible oil, while its wide habitat and good productivity benefits offer broad prospects for development in the hills and mountains of subtropical China.
]]>Authors: Halina Szatylowicz Paweł A. Wieczorkiewicz Tadeusz M. Krygowski
Aromaticity, a very important term in organic chemistry, has never been defined unambiguously. Various ways to describe it come from different phenomena that have been experimentally observed. The most important examples related to some theoretical concepts are presented here.
]]>Authors: Ross Greenwood Anthony Aguirre
Everlasting inflation has far-reaching implications for cosmology and for the standing of self-localizing inferences made by observers, which have been subjects of renewed interest in light of the growing acceptance of theory landscapes and the attendant anthropic arguments. Under what assumptions, and to what extent, does inflation generically produce an eternal “multiverse” without fine-tuning, with respect to measures over the space of inflationary cosmologies driven by a single minimally coupled scalar field? We address this and related questions with numerical simulations of inflationary dynamics across populations of randomly generated inflation models, instantiating a few particular, simply defined measures.
]]>Authors: Kiernan Foster Brooke Hillman Vahab Rajaei Kimsorn Seng Sarah Maurer
One of the challenges in understanding chemical evolution is the large number of starting organics and environments that were plausible on early Earth. Starting with realistic organic mixtures and using chemical analyses that are not biologically biased, understanding the interplay between organic composition and environment can be approached using statistical analysis. In this work, a mixture of 73 organics was cycled through dehydrating conditions five times, considering environmental parameters of pH, salinity, and rehydration solution. Products were analyzed by HPLC, amide and ester assays, and phosphatase and esterase assays. While all environmental factors were found to influence chemical evolution, salinity was found to play a large role in the evolution of these mixtures, with samples diverging at very high sea salt concentrations. This framework should be expanded and formalized to improve our understanding of abiogenesis.
]]>Authors: Konstantina Tsami Christina Barda George Ladopoulos Nikos Asoutis Didaras Maria-Eleni Grafakou Jörg Heilmann Dimitris Mossialos Michail Christou Rallis Helen Skaltsa
Within the large family of Dioscoreaceae, Dioscorea communis (L.) Caddick & Wilkin (syn. Tamus communis L.) is considered among the four most widespread representatives in Europe, and it is commonly known under the name black bryony or bryonia. To date, reports have revealed several chemical components from the leaves and tubers of this plant. Nevertheless, an extensive phytochemical investigation has not been performed on its berry juice. In the present study, metabolite profiling procedures, using LC-MS, GC-MS, and NMR approaches, were applied to investigate the chemical profile of the D. communis berries. This work reveals the presence of several metabolites belonging to different phytochemical groups, such as fatty acid esters, alkylamides, phenolic derivatives, and organic acids, with lactic acid being predominant. In parallel, based on orally transmitted traditional uses, the initial extract and selected fractions were tested in vitro for their antibacterial effects and exhibited good activity against two bacterial strains related to skin infections: methicillin-resistant Staphylococcus aureus and Cutibacterium acnes. The MIC and MBC values of the extract were determined at 1.56% w/v against both bacteria. The results of this study provide important information on the chemical characterization of the D. communis berry juice, unveiling the presence of 71 metabolites, which might contribute to and further explain its specific antibacterial activity and its occasional toxicity.
]]>Authors: Gregory P. Conners
Background: Management of the child who has swallowed a foreign body should be guided by the risk of complication. Objective of the Review: This review discusses the patient and foreign body characteristics most likely to be associated with complications. Discussion: Most swallowed foreign bodies will pass through the GI tract without complication. Children with pre-existing GI tract abnormalities of any sort, or those who swallow higher-risk foreign bodies, are at higher risk. Higher-risk foreign bodies include long, sharp, or pointed objects, button batteries, and small magnets. Nearly any child who presents to an Emergency Department or other acute care setting after foreign body ingestion should undergo plain radiography; other forms of imaging may also be appropriate. Primary care providers may opt for an initial observation period when there is lower risk of complication. Esophageal button batteries should be emergently removed; other esophageal objects should be promptly removed or, if low risk, allowed a brief period to pass spontaneously. Most lower GI tract foreign bodies will pass spontaneously. Prevention, while not always possible, is preferable to management of foreign body ingestion. Conclusions: Management strategies for children who have swallowed foreign bodies can be optimized by considering relevant patient and foreign body factors, and how they contribute to the risk of complication.
]]>Authors: Florin-Felix Nichita
The Special Issue “Mathematics and Poetry, with a View towards Machine Learning” started with three guiding questions from the guest editor: What are the similarities (and differences) between mathematical problems and poetic works [...]
]]>Authors: Giuseppe Cantisani Giulia Del Serrone Paolo Peluso
In recent years, innovative progress in information and communication technology (ICT) has introduced new sources for traffic data collection and analysis. On-board sensors such as GPS-GPRS boxes, generally installed for insurance purposes, communicate information from circulating vehicles to data centers. Geographic location, date and time, and vehicle speed and direction are systematically transmitted and stored as Historical Car Data (HCD) from probe vehicles in the traffic stream. These databases provide a good opportunity to analyze vehicle motion in both the temporal and spatial domains. The aim of this study is to assess the reliability of this kind of data gathering. Since instrumented vehicles account for a small percentage of the entire vehicle fleet, it is important to understand whether they can be considered a sample representative of the whole population. The paper presents a comparison of speed data obtained from HCD with those recorded by inductive-loop detectors and microwave radar sensors; the analysis required the definition of specific methodologies and procedures. The obtained results show a high correspondence between the two sets of data. Therefore, HCD can be proposed for the detailed monitoring of, and studies on, operating conditions along road networks.
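One standard way to quantify the correspondence between two paired speed series is the Pearson correlation coefficient; the sketch below uses hypothetical speed values, not the paper's data, purely for illustration of the computation.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean speeds (km/h) per time slot: probe vehicles (HCD)
# vs. a co-located inductive-loop detector.
hcd   = [52.1, 48.3, 61.0, 55.4, 47.9, 58.2]
loops = [53.0, 47.5, 60.2, 56.1, 49.0, 57.6]
print(round(pearson_r(hcd, loops), 3))  # r close to 1 for these toy series
```

A value of r near 1 indicates that the probe-vehicle sample tracks the detector measurements closely, which is the kind of agreement the study reports.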
]]>Authors: Sławomir J. Grabowski
Complexes linked by various interactions are analysed in this study. They are characterized by the tetrahedral configuration of the Lewis acid centre. The interactions that are the subject of this study are classified as σ-hole bonds, such as halogen, chalcogen, pnicogen, and tetrel bonds. In the case of strong interactions, the tetrahedral configuration of the Lewis acid centre changes into a trigonal bipyramid configuration. This change is in line with the Valence-Shell Electron-Pair Repulsion (VSEPR) model, and it is supported here by the results of high-level ab initio calculations. The theoretical results concerning the geometries are supported mainly by the Natural Bond Orbital (NBO) method.
]]>Authors: Tugce Kalkan Florin F. Nichita Tahsin Oner Ibrahim Senturk Mehmet Terziler
The current paper explores the potential of the interplay between mathematics and poetry. We first recall some definitions and results that are needed to construct solutions of the Yang–Baxter equation. A new duality principle is presented and Boolean coalgebras are introduced. A section of poetry dedicated to the Yang–Baxter equation is presented, followed by a discussion of a poem related to a mathematical formula. The final section presents our conclusions and further information on these topics.
]]>Authors: Vishakha Singh Amit Khurana Umashanker Navik Prince Allawadhi Kala Kumar Bharani Ralf Weiskirchen
Apoptosis is an evolutionarily conserved, sequential process of cell death that maintains a homeostatic balance between cell formation and cell death. It is a vital process for normal eukaryotic development, as it contributes to the renewal of cells and tissues. Further, it plays a crucial role in the elimination of unnecessary cells through phagocytosis and prevents undesirable immune responses. Apoptosis is regulated by a complex signaling mechanism driven by interactions among several protein families, such as caspases, inhibitors of apoptosis proteins, B-cell lymphoma 2 (BCL-2) family proteins, and several other proteases such as perforins and granzymes. The signaling pathway consists of both pro-apoptotic and pro-survival members, which together determine whether a cell survives or dies. However, any aberration in this pathway can lead to abnormal cell proliferation, ultimately leading to the development of cancer, autoimmune disorders, etc. This review aims to elaborate on apoptotic signaling pathways and mechanisms, the interacting members involved in signaling, and how apoptosis is associated with carcinogenesis, along with insights into targeting apoptosis for disease resolution.
]]>