Topic Editors

Dr. Christos Troussas
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Prof. Dr. Cleo Sgouropoulou
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Dr. Akrivi Krouska
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Prof. Dr. Ioannis Voyiatzis
Department of Informatics and Computer Engineering, University of West Attica, 12243 Egaleo, Greece
Dr. Athanasios Voulodimos
School of Electrical and Computer Engineering, National Technical University of Athens, 9, Iroon Polytechniou st., 157 80 Athens, Greece

Interactive Artificial Intelligence and Man-Machine Communication

Abstract submission deadline
closed (20 December 2023)
Manuscript submission deadline
closed (20 February 2024)
Viewed by 30855

Topic Information

Dear Colleagues,

Interactive artificial intelligence is based on the view that human intelligence is characterized by interactivity. Many of the core research issues in artificial intelligence concern smart and adaptive systems whose aim is to interact with people on their own terms. In this digital era, digital systems are growing at an unprecedented rate, and users increasingly wish to interact with tailored content. Users are therefore seeking intelligent software with highly individualized user experiences (UXs), not only adaptive user interfaces (UIs). Redefining traditional system development is thus of utmost importance: sophisticated mechanisms should be incorporated into the development of robust systems and fundamental algorithms, so that disruptive technological features can be adjusted to human needs. Such systems can exhibit a high degree of intelligent man–machine communication, rich UXs, user-centric features, and intelligence in their reasoning and diagnostic mechanisms. In recent decades, research efforts have focused on promoting man–machine communication and interactive artificial intelligence. Despite this increased research interest, there is still room for further research on man–machine communication, interactivity, and artificial intelligence.

This call for papers requests original research papers, review articles, and short communications in the aforementioned areas. Topics of interest include, but are not limited to, the following:

  • Human–computer interaction;
  • Personalization and adaptivity in systems and services;
  • Machine/deep/reinforcement learning;
  • Collaborative and group work, communities of practice, and social networks;
  • Immersive and virtual reality environments;
  • Ubiquitous, mobile, and cloud environments;
  • Adaptive support for navigation, models of users, diagnosis, reasoning, and feedback;
  • Modeling of motivation, metacognition, and affect;
  • Affective computing;
  • Applications of machine learning to address real-world problems.

Dr. Christos Troussas
Prof. Dr. Cleo Sgouropoulou
Dr. Akrivi Krouska
Prof. Dr. Ioannis Voyiatzis
Dr. Athanasios Voulodimos
Topic Editors

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Entropy | 2.7 | 4.7 | 1999 | 20.8 Days | CHF 2600
Future Internet | 3.4 | 6.7 | 2009 | 11.8 Days | CHF 1600
Algorithms | 2.3 | 3.7 | 2008 | 15 Days | CHF 1600
Computation | 2.2 | 3.3 | 2013 | 18 Days | CHF 1800
Machine Learning and Knowledge Extraction | 3.9 | 8.5 | 2019 | 19.9 Days | CHF 1800
Multimodal Technologies and Interaction | 2.5 | 4.3 | 2017 | 14 Days | CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of this by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (9 papers)

28 pages, 4142 KiB  
Review
Optimal Stimulus Properties for Steady-State Visually Evoked Potential Brain–Computer Interfaces: A Scoping Review
by Clemens Reitelbach and Kiemute Oyibo
Multimodal Technol. Interact. 2024, 8(2), 6; https://0-doi-org.brum.beds.ac.uk/10.3390/mti8020006 - 24 Jan 2024
Viewed by 1519
Abstract
Brain–computer interfaces (BCIs) based on steady-state visually evoked potentials (SSVEPs) have been well researched due to their easy system configuration, little or no user training and high information transfer rates. To elicit an SSVEP, a repetitive visual stimulus (RVS) is presented to the user. The properties of this RVS (e.g., frequency, luminance) have a significant influence on the BCI performance and user comfort. Several studies in this area over the last decade and a half have focused on evaluating different stimulus parameters (i.e., properties). However, there is little research on the synthesis of the existing studies, as the last review on the subject was published in 2010. Consequently, we conducted a scoping review of related studies on the influence of stimulus parameters on SSVEP response and user comfort, analyzed them and summarized the findings considering the physiological and neurological processes associated with BCI performance. In the review, we found that stimulus type, frequency, color contrast, luminance contrast and size/shape of the retinal image are the most important stimulus properties that influence SSVEP response. Regarding stimulus type, frequency and luminance, there is a trade-off between the best SSVEP response quality and visual comfort. Finally, since there is no unified measuring method for visual comfort and a lack of differentiation in the high-frequency band, we proposed a measuring method and a division of the band. In summary, the review highlights which stimulus properties are important to consider when designing SSVEP BCIs. It can be used as a reference point for future research in BCI, as it will help researchers to optimize the design of their SSVEP stimuli. Full article
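The stimulus-frequency trade-offs discussed in this review depend on how a target flicker frequency is realized on a fixed-refresh display. As a rough illustration (not taken from the review itself; the 60 Hz refresh rate and frame counts are assumptions), a square-wave approximation of a target flicker frequency can be sketched as:

```python
def flicker_frames(target_hz: int, refresh_hz: int = 60, n_frames: int = 60) -> list:
    """Return a per-frame on/off (1/0) sequence approximating a square-wave
    flicker at target_hz on a display refreshing at refresh_hz."""
    # Integer arithmetic keeps cycle boundaries exact: frame i falls in the
    # first half of a stimulus cycle iff (i * target_hz) mod refresh_hz < refresh_hz / 2.
    return [1 if (i * target_hz) % refresh_hz * 2 < refresh_hz else 0
            for i in range(n_frames)]

# A 12 Hz stimulus on a 60 Hz display repeats a 5-frame on/on/on/off/off pattern.
seq = flicker_frames(12)
```

Target frequencies that do not divide the refresh rate evenly (e.g., 13 Hz on a 60 Hz display) produce uneven cycle lengths, which is one reason stimulus frequency choice interacts with display hardware in SSVEP design.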

22 pages, 6112 KiB  
Article
Creative Use of OpenAI in Education: Case Studies from Game Development
by Fiona French, David Levi, Csaba Maczo, Aiste Simonaityte, Stefanos Triantafyllidis and Gergo Varda
Multimodal Technol. Interact. 2023, 7(8), 81; https://0-doi-org.brum.beds.ac.uk/10.3390/mti7080081 - 18 Aug 2023
Cited by 1 | Viewed by 4519
Abstract
Educators and students have shown significant interest in the potential for generative artificial intelligence (AI) technologies to support student learning outcomes, for example, by offering personalized experiences, 24 h conversational assistance, text editing and help with problem-solving. We review contemporary perspectives on the value of AI as a tool in an educational context and describe our recent research with undergraduate students, discussing why and how we integrated OpenAI tools ChatGPT and Dall-E into the curriculum during the 2022–2023 academic year. A small cohort of games programming students in the School of Computing and Digital Media at London Metropolitan University was given a research and development assignment that explicitly required them to engage with OpenAI. They were tasked with evaluating OpenAI tools in the context of game development, demonstrating a working solution and reporting on their findings. We present five case studies that showcase some of the outputs from the students and we discuss their work. This mode of assessment was both productive and popular, mapping to students’ interests and helping to refine their skills in programming, problem-solving, critical reflection and exploratory design. Full article

14 pages, 1216 KiB  
Article
Enhancing Object Detection for VIPs Using YOLOv4_Resnet101 and Text-to-Speech Conversion Model
by Tahani Jaser Alahmadi, Atta Ur Rahman, Hend Khalid Alkahtani and Hisham Kholidy
Multimodal Technol. Interact. 2023, 7(8), 77; https://0-doi-org.brum.beds.ac.uk/10.3390/mti7080077 - 02 Aug 2023
Cited by 6 | Viewed by 1548
Abstract
Vision impairment affects an individual’s quality of life, posing challenges for visually impaired people (VIPs) in various aspects such as object recognition and daily tasks. Previous research has focused on developing visual navigation systems to assist VIPs, but there is a need for further improvements in accuracy, speed, and inclusion of a wider range of object categories that may obstruct VIPs’ daily lives. This study presents a modified version of YOLOv4 with a ResNet-101 backbone (YOLOv4_Resnet101), trained on multiple object classes to assist VIPs in navigating their surroundings. In comparison to the Darknet backbone utilized in YOLOv4, the ResNet-101 backbone in YOLOv4_Resnet101 offers a deeper and more powerful feature extraction network. The ResNet-101’s greater capacity enables better representation of complex visual patterns, which increases the accuracy of object detection. The proposed model is validated using the Microsoft Common Objects in Context (MS COCO) dataset. Image pre-processing techniques are employed to enhance the training process, and manual annotation ensures accurate labeling of all images. The module incorporates text-to-speech conversion, providing VIPs with auditory information to assist in obstacle recognition. The model achieves an accuracy of 96.34% on the test images obtained from the dataset after 4000 iterations of training, with a loss error rate of 0.073%. Full article
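The text-to-speech stage described in this abstract has to turn raw detections into short utterances. The paper does not give its message format, so the mapping below (the labels, confidence threshold, and left/ahead/right split by bounding-box centre are all hypothetical) is only a sketch of that final step:

```python
def detections_to_speech(detections, frame_width=640, min_conf=0.5):
    """Convert object detections into short phrases for a TTS engine.
    Each detection is (label, confidence, x_center); the x position is
    mapped to the 'left' / 'ahead' / 'right' third of the frame."""
    phrases = []
    for label, conf, x_center in detections:
        if conf < min_conf:  # skip low-confidence detections
            continue
        third = frame_width / 3
        side = "left" if x_center < third else ("right" if x_center >= 2 * third else "ahead")
        phrases.append(f"{label} {side}")
    return ", ".join(phrases)

# Low-confidence 'dog' is dropped; the resulting string would be handed to a TTS engine.
msg = detections_to_speech([("person", 0.92, 100), ("car", 0.88, 500), ("dog", 0.3, 320)])
```

In a real assistive system the resulting string would be passed to a speech synthesizer; keeping the phrases short matters because spoken feedback must keep up with the detection rate.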

12 pages, 483 KiB  
Article
Automatic Generation of Literary Sentences in French
by Luis-Gil Moreno-Jiménez, Juan-Manuel Torres-Moreno and Roseli Suzi Wedemann
Algorithms 2023, 16(3), 142; https://0-doi-org.brum.beds.ac.uk/10.3390/a16030142 - 06 Mar 2023
Viewed by 1159
Abstract
In this paper, we describe a model for the automatic generation of literary sentences in French. Although there has been much recent effort directed towards automatic text generation in general, the generation of creative, literary sentences that is not restricted to a specific genre, which we approached in this work, is a difficult task that is not commonly treated in the scientific literature. In particular, our present model has not been previously applied to the generation of sentences in the French language. Our model was based on algorithms that we previously used to generate sentences in Spanish and Portuguese and on a new corpus, which we constructed and present here, consisting of literary texts in French, called MegaLitefr. Our automatic text generation algorithm combines language models, shallow parsing, the canned text method, and deep learning artificial neural networks. We also present a manual evaluation protocol that we propose and implemented to assess the quality of the artificial sentences generated by our algorithm, by testing if they fulfil four simple criteria. We obtained encouraging results from the evaluators for most of the desired features of our artificially generated sentences. Full article

16 pages, 1472 KiB  
Article
Can AI-Oriented Requirements Enhance Human-Centered Design of Intelligent Interactive Systems? Results from a Workshop with Young HCI Designers
by Pietro Battistoni, Marianna Di Gregorio, Marco Romano, Monica Sebillo and Giuliana Vitiello
Multimodal Technol. Interact. 2023, 7(3), 24; https://0-doi-org.brum.beds.ac.uk/10.3390/mti7030024 - 25 Feb 2023
Cited by 2 | Viewed by 2316
Abstract
In this paper, we show that the evolution of artificial intelligence (AI) and its increased presence within an interactive system pushes designers to rethink the way in which AI and its users interact and to highlight users’ feelings towards AI. For novice designers, it is crucial to acknowledge that both the user and artificial intelligence possess decision-making capabilities. Such a process may involve mediation between humans and artificial intelligence. This process should also consider the mutual learning that can occur between the two entities over time. Therefore, we explain how to adapt the Human-Centered Design (HCD) process to give centrality to AI as the user, further empowering the interactive system, and to adapt the interaction design to the actual capabilities, limitations, and potentialities of AI. This is to encourage designers to explore the interactions between AI and humans and focus on the potential user experience. We achieve such centrality by extracting and formalizing a new category of AI requirements. We have provocatively named this extension: “Intelligence-Centered”. A design workshop with MSc HCI students was carried out as a case study supporting this change of perspective in design. Full article

22 pages, 1181 KiB  
Review
Dental Age Estimation Using Deep Learning: A Comparative Survey
by Essraa Gamal Mohamed, Rebeca P. Díaz Redondo, Abdelrahim Koura, Mohamed Sherif EL-Mofty and Mohammed Kayed
Computation 2023, 11(2), 18; https://0-doi-org.brum.beds.ac.uk/10.3390/computation11020018 - 29 Jan 2023
Cited by 3 | Viewed by 4532
Abstract
The significance of age estimation arises from its applications in various fields, such as forensics, criminal investigation, and illegal immigration. Due to the increased importance of age estimation, this area of study requires more investigation and development. Several methods exist for age estimation using biometric traits, such as the face, teeth, bones, and voice. Among them, teeth are quite convenient, since they are resistant and durable and undergo several changes from birth to adulthood that can be used to derive age. In this paper, we summarize the common biometric traits for age estimation and how this information has been used in previous research studies. We have paid special attention to traditional machine learning methods and deep learning approaches used for dental age estimation. Thus, we summarize the advances in convolutional neural network (CNN) models that estimate dental age from radiological images, such as 3D cone-beam computed tomography (CBCT), X-ray, and orthopantomography (OPG). Finally, we also point out the main innovations that would potentially increase the performance of age estimation systems. Full article

19 pages, 1844 KiB  
Article
The Study of Mathematical Models and Algorithms for Face Recognition in Images Using Python in Proctoring System
by Ardak Nurpeisova, Anargul Shaushenova, Zhazira Mutalova, Zhandos Zulpykhar, Maral Ongarbayeva, Shakizada Niyazbekova, Alexander Semenov and Leila Maisigova
Computation 2022, 10(8), 136; https://0-doi-org.brum.beds.ac.uk/10.3390/computation10080136 - 09 Aug 2022
Cited by 7 | Viewed by 4462
Abstract
The article analyzes the possibility and rationality of using proctoring technology in remote monitoring of the progress of university students as a tool for identifying a student. Proctoring technology includes face recognition technology. Face recognition belongs to the field of artificial intelligence and biometric recognition and is a very successful application of image analysis and understanding. To implement the task of detecting a person’s face in a video stream, the Python programming language was used with the OpenCV library. Mathematical models of face recognition are also described; these models are applied during data generation, face analysis, and image classification. We present algorithms for solving the corresponding computer vision problems. We included 400 photographs of 40 students in the database. The photographs were taken at different angles and under different lighting conditions; there were also interferences such as the presence of a beard, mustache, glasses, hats, etc. When analyzing particular cases of errors, it can be concluded that accuracy decreases primarily due to images with noise and poor lighting quality. Full article
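The recognition stage described in this abstract ultimately reduces to comparing a probe image's feature vector against enrolled student templates. The study's actual pipeline uses OpenCV; the tiny feature vectors, the Euclidean metric, and the student identifiers below are assumptions made purely for illustration:

```python
import math

def nearest_identity(probe, gallery):
    """Match a probe feature vector to the closest enrolled template.
    gallery maps student id -> feature vector; returns (best_id, distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(((sid, dist(probe, vec)) for sid, vec in gallery.items()),
               key=lambda t: t[1])

# Two enrolled students with toy 3-dimensional feature vectors.
gallery = {"student_01": [0.1, 0.9, 0.2], "student_02": [0.8, 0.1, 0.7]}
best_id, d = nearest_identity([0.15, 0.85, 0.25], gallery)
```

Noise and poor lighting, the main error sources the authors identify, show up in this framing as inflated distances between a probe and its true template, which is why a distance threshold is usually added before accepting a match.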

21 pages, 323 KiB  
Article
Behaviour of True Artificial Peers
by Norman Weißkirchen and Ronald Böck
Multimodal Technol. Interact. 2022, 6(8), 64; https://0-doi-org.brum.beds.ac.uk/10.3390/mti6080064 - 02 Aug 2022
Cited by 1 | Viewed by 1576
Abstract
Typical current assistance systems often take the form of optimised user interfaces between the user's interests and the capabilities of the system. In contrast, a peer-like system should possess independent decision-making capabilities, which in turn require an understanding and knowledge of the current situation to perform a sensible decision-making process. We present a method for a system capable of interacting with its user to optimise its information-gathering task, while at the same time ensuring the necessary satisfaction with the system, so that the user is not discouraged from further interaction. Based on this collected information, the system may then create and employ a specifically adapted rule base, which is much closer to an intelligent companion than a typical technical user interface. A further aspect is the perception of the system as a trustworthy and understandable partner, allowing an empathetic understanding between the user and the system and leading to a more closely integrated smart environment. Full article

25 pages, 1963 KiB  
Article
AI Technologies for Machine Supervision and Help in a Rehabilitation Scenario
by Gábor Baranyi, Bruno Carlos Dos Santos Melício, Zsófia Gaál, Levente Hajder, András Simonyi, Dániel Sindely, Joul Skaf, Ondřej Dušek, Tomáš Nekvinda and András Lőrincz
Multimodal Technol. Interact. 2022, 6(7), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/mti6070048 - 22 Jun 2022
Cited by 6 | Viewed by 3328
Abstract
We consider, evaluate, and develop methods for home rehabilitation scenarios. We show the required modules for this scenario. Due to the large number of modules, the framework falls into the category of Composite AI. Our work is based on collected videos with high-quality execution and samples of typical errors. They are augmented by sample dialogues about the exercise to be executed and the assumed errors. We study and discuss body pose estimation technology, dialogue systems of different kinds and the emerging constraints of verbal communication. We demonstrate that the optimization of the camera and the body pose allows high-precision recording and requires the following components: (1) optimization needs a 3D representation of the environment, (2) a navigation dialogue to guide the patient to the optimal pose, (3) semantic and instance maps are necessary for verbal instructions about the navigation. We put forth different communication methods, from video-based presentation to chit-chat-like dialogues through rule-based methods. We discuss the methods for different aspects of the challenges that can improve the performance of the individual components. Due to the emerging solutions, we claim that the range of applications will drastically grow in the very near future. Full article
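Checking exercise execution from estimated body poses, as this rehabilitation framework does, typically means comparing joint angles against a reference movement. The paper does not specify its representation, so this elbow-angle computation from 2D keypoints is a generic illustration (the keypoint names and coordinates are made up):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 2D keypoints a-b-c,
    e.g. shoulder-elbow-wrist for an elbow angle."""
    v1 = (a[0] - b[0], a[1] - b[1])   # vector from joint to first keypoint
    v2 = (c[0] - b[0], c[1] - b[1])   # vector from joint to second keypoint
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# A fully extended arm: shoulder, elbow and wrist on one line -> 180 degrees.
angle = joint_angle((0, 0), (1, 0), (2, 0))
```

A checker built on pose estimation would compare such angles over time against the high-quality reference recordings the authors collected, flagging deviations as candidate execution errors.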
