Special Issue "Multimedia Indexing and Retrieval"

A special issue of Technologies (ISSN 2227-7080). This special issue belongs to the section "Information and Communication Technologies".

Deadline for manuscript submissions: 30 November 2021.

Special Issue Editors

Dr. Abebe Rorissa
Guest Editor
College of Emergency Preparedness, Homeland Security and Cybersecurity, University at Albany, Albany, USA
Interests: multimedia information organization and retrieval; measurement and scaling of users’ information needs; users’ perceptions of multimedia information sources and services
Dr. Xiaojun (Jenny) Yuan
Guest Editor
College of Emergency Preparedness, Homeland Security and Cybersecurity, University at Albany, Albany, USA
Interests: information retrieval; human–computer interaction; user interface design and evaluation; voice search

Special Issue Information

Advances in and the proliferation of multimedia technologies have exponentially increased the number of digital images, videos, audio recordings, and textual information sources, making the creation, management, indexing, retrieval, use, and dissemination of multimedia information a challenge. The broad scope, demand, and applications of multimedia information, in areas such as education, healthcare, entertainment, and security, together with recent developments in data science, machine learning, and search engines, both create conducive conditions for and necessitate continuous improvement of multimedia indexing and retrieval systems that meet users’ needs. The increased demand for and use of multimedia information also make its indexing and retrieval a very important area of research, and the semantic nature of multimedia indexing and retrieval adds a further layer to the challenge.

Despite recent advances, these challenges remain. While existing research on, and systems for, multimedia indexing and retrieval have employed various methods, including categorization, filtering, and summarization, bridging the semantic gap remains a critical and elusive goal. Among the questions yet to be fully answered: How can multimodal features be synchronized for indexing and retrieval? How can high-dimensional features be indexed efficiently? How can we ensure retrieval accuracy and relevance to satisfy users’ needs, given the vast amount and diversity of data in a multimedia collection?
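To make the indexing questions concrete: content-based retrieval systems typically reduce each image, video, or audio clip to a high-dimensional feature vector and answer queries by nearest-neighbour search over those vectors. The sketch below is only a toy illustration of that basic operation, using plain NumPy and random vectors in place of real learned features; specialised approximate nearest-neighbour indexes exist precisely to avoid the brute-force scan shown here.

    import numpy as np

    # Toy stand-in for a multimedia collection: each row is a feature vector
    # (e.g., a CNN embedding of an image); real systems use learned features.
    rng = np.random.default_rng(0)
    collection = rng.normal(size=(10_000, 512)).astype(np.float32)

    # L2-normalise so that a dot product equals cosine similarity.
    collection /= np.linalg.norm(collection, axis=1, keepdims=True)

    def search(query: np.ndarray, k: int = 5) -> np.ndarray:
        """Return the indices of the k items most similar to the query vector."""
        q = query / np.linalg.norm(query)
        scores = collection @ q           # cosine similarity against every item
        return np.argsort(-scores)[:k]    # brute force; ANN indexes avoid this full scan

    query_vec = rng.normal(size=512).astype(np.float32)
    print(search(query_vec))

Scaling this basic operation to millions of items and multiple, synchronized modalities is exactly where the open questions above arise.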

Advances in related technologies have also raised users’ expectations of the search experience when looking for multimedia information. Human factors and interface design should therefore be taken into serious consideration when designing multimedia indexing and retrieval systems. In particular, issues related to multimodal human–computer interfaces and human–computer interaction in multimedia indexing and retrieval deserve extra attention.

In light of these challenges, for this Special Issue, you are encouraged to submit insights, opinions, and innovative research work on, but not limited to, the following topics of interest:

  • Content-based multimedia indexing and retrieval;
  • Intelligent multimedia indexing and retrieval;
  • Multimedia semantics;
  • Color- and texture-based multimedia indexing and retrieval;
  • Pattern recognition;
  • Computer vision;
  • Image/video processing and analysis;
  • Speech recognition;
  • Evaluation of multimedia indexing and retrieval techniques;
  • Human factors in multimedia indexing and retrieval;
  • Multimodal human–computer interface.

Dr. Abebe Rorissa
Dr. Xiaojun (Jenny) Yuan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multimedia
  • Indexing
  • Retrieval
  • Pattern recognition
  • Computer vision
  • Image processing
  • Video analysis
  • Speech recognition

Published Papers (2 papers)


Research

Article
Computer Vision Framework for Wheat Disease Identification and Classification Using Jetson GPU Infrastructure
Technologies 2021, 9(3), 47; https://doi.org/10.3390/technologies9030047 - 02 Jul 2021
Abstract
Diseases have adverse effects on crop production and cause yield loss. Various diseases such as leaf rust, stem rust, and stripe rust can affect yield quality and quantity in a studied area. In addition, manual wheat disease identification and interpretation is time-consuming and cumbersome. Currently, decisions related to plants mainly rely on the level of expertise in the domain. To resolve these challenges and to identify wheat disease as early as possible, we implemented different deep learning models such as InceptionV3, ResNet50, and VGG16/19. This research was conducted in collaboration with the Bishoftu Agricultural Research Institute, Ethiopia. Our main objective was to automate plant-disease identification using advanced deep learning approaches and image data. For the experiment, RGB image data were collected from the Bishoftu area. In the experimental results, the VGG19 model classified wheat disease with 99.38% accuracy.
(This article belongs to the Special Issue Multimedia Indexing and Retrieval)
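The abstract above reports fine-tuning standard pretrained convolutional networks (InceptionV3, ResNet50, VGG16/19) on RGB wheat images. The snippet below is only an illustrative sketch of that general transfer-learning recipe, not the authors' actual pipeline; the directory layout, image size, number of classes, and training settings are all assumptions.

    # Illustrative sketch: fine-tuning a pretrained VGG19 for multi-class
    # plant-disease classification. Paths, image size, and class count are
    # assumptions, not details taken from the published paper.
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG19

    NUM_CLASSES = 4          # e.g., healthy, leaf rust, stem rust, stripe rust (assumed)
    IMG_SIZE = (224, 224)    # VGG19's default input resolution

    # Load ImageNet weights without the top classifier and freeze the convolutional base.
    base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Hypothetical directory of labelled RGB wheat images, one subfolder per class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "wheat_images/train", image_size=IMG_SIZE, label_mode="categorical")
    model.fit(train_ds, epochs=10)

Freezing the convolutional base and training only a small classification head is the usual first step; the base can later be unfrozen for fine-tuning at a lower learning rate.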

Article
Introducing Tagasaurus, an Approach to Reduce Cognitive Fatigue from Long-Term Interface Usage When Storing Descriptions and Impressions from Photographs
Technologies 2021, 9(3), 45; https://doi.org/10.3390/technologies9030045 - 29 Jun 2021
Abstract
Digital cameras and mobile phones have given people around the world the ability to take large numbers of photos and store them on their computers. As these images serve the purpose of storing memories and bringing them to mind in the potentially far future, it is important to also store the impressions a user may have of them. Annotating these images can be a laborious process, and the work here presents an application design and a functioning, openly available implementation that eases the effort of this task. It also draws inspiration from interface developments in previous applications such as the Nokia Lifeblog and the Facebook user interface. A different mode of sentiment entry is provided, where users interact with slider widgets rather than selecting an emoticon from a set, to offer a more fine-grained value. Special attention is paid to avoiding cognitive strain by avoiding nested tool selections.
(This article belongs to the Special Issue Multimedia Indexing and Retrieval)
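The abstract above describes replacing a discrete emoticon picker with a slider that records a finer-grained sentiment value. The snippet below is only an illustrative sketch of such a slider widget in Python/tkinter, not the Tagasaurus implementation; the value range, resolution, and labels are assumptions.

    # Illustrative sketch: a slider-based sentiment entry widget of the kind
    # described in the abstract. Range and labels are assumptions.
    import tkinter as tk

    root = tk.Tk()
    root.title("Photo impression (sketch)")

    label = tk.Label(root, text="Sentiment: +0.00")

    def on_change(value):
        # Show the continuous sentiment value instead of a discrete emoticon choice.
        label.config(text=f"Sentiment: {float(value):+.2f}")

    # A single slider gives a fine-grained value from -1 (negative) to +1 (positive).
    slider = tk.Scale(root, from_=-1.0, to=1.0, resolution=0.01,
                      orient=tk.HORIZONTAL, length=300, command=on_change)
    slider.pack(padx=10, pady=10)
    label.pack(pady=(0, 10))

    root.mainloop()

Storing a continuous value alongside each photograph's description avoids mapping impressions onto a small fixed set of icons.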
