Recent Advances in Computer Graphics and Artificial Intelligence: From Computer Vision to Image Synthesis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 January 2022) | Viewed by 7574

Special Issue Editor


Guest Editor
Department of Information Technologies and Systems, University of Castilla-La Mancha, Paseo de la Universidad 4, 13071 Ciudad Real, Spain
Interests: intelligent agents; computer vision; augmented and virtual reality

Special Issue Information

In recent years, the connection between computer graphics and artificial intelligence has grown much stronger. Accelerated advances in both disciplines are opening new avenues of knowledge, and the synergy between the two fields is yielding striking results.

This Special Issue is dedicated to the integration of both disciplines, covering the use of Artificial Intelligence techniques in the domain of computer graphics and in directly connected fields of knowledge, such as computer vision, image processing and information visualization.

These latest technological developments will be shared through this Special Issue. New types of applications utilizing intelligent techniques are also welcome. Potential topics include, but are not limited to:

Declarative techniques in the 2D/3D modeling of scenes;
Automatic construction of virtual worlds;
Optimization of scenes, animations and rendering methods;
Intelligent resolution of analysis and synthesis problems in Computer Graphics;
Artificial intelligence techniques in image rendering / synthesis;
Generative Adversarial Networks in Computer Graphics and Computer Vision;
Artificial intelligence techniques for improving animations;
Automatic and behavioral animation and physics;
Semantic description techniques of image and scene properties;
Pose detection and segmentation;
Artificial intelligence techniques in image recognition;
Noise removal in realistic synthesis methods;
Image translation;
Intelligent graphic interfaces;
Machine learning in computer vision and image synthesis;
Collaborative design and intelligent support systems;
Intelligent information visualization…

Prof. Dr. Carlos Gonzalez Morcillo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (2 papers)


Research

17 pages, 2570 KiB  
Article
Short Text Aspect-Based Sentiment Analysis Based on CNN + BiGRU
by Ziwen Gao, Zhiyi Li, Jiaying Luo and Xiaolin Li
Appl. Sci. 2022, 12(5), 2707; https://0-doi-org.brum.beds.ac.uk/10.3390/app12052707 - 05 Mar 2022
Cited by 27 | Viewed by 3681
Abstract
This paper describes the construction of a short-text aspect-based sentiment analysis method based on a Convolutional Neural Network (CNN) and a Bidirectional Gated Recurrent Unit (BiGRU). The hybrid model can fully extract text features, address long-distance dependencies in the sequence, and improve the reliability of training. This article reports empirical research built on a review of the literature. The first step was to obtain the dataset and perform preprocessing, after which scikit-learn was used to perform TF-IDF calculations to obtain the feature word vector weights, extract the aspect-level feature ontology words of the evaluated text, and manually mark the ontology of the reviewed text and the corresponding sentiment polarity. In the sentiment analysis section, a hybrid model based on CNN and BiGRU (CNN + BiGRU) was constructed, which uses corpus sentences and feature words as the vector input and predicts the emotional polarity. The experimental results show that the classification accuracy of the improved CNN + BiGRU model improved by 12.12%, 8.37%, and 4.46% compared with the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and convolutional LSTM (C-LSTM) models, respectively.
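The hybrid architecture summarized above can be sketched as a minimal NumPy forward pass: a convolutional branch with max-over-time pooling runs in parallel with forward and backward GRUs, and their outputs are concatenated before a softmax over sentiment polarities. All weights, dimensions, and the single linear output head are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def conv1d_maxpool(x, filters):
    """x: (seq_len, emb); filters: (n_f, width, emb). ReLU + max-over-time pooling."""
    n_f, width, _ = filters.shape
    feats = np.empty(n_f)
    for i in range(n_f):
        acts = [np.sum(x[t:t + width] * filters[i])
                for t in range(x.shape[0] - width + 1)]
        feats[i] = max(0.0, max(acts))
    return feats  # (n_f,)

def gru_last(x, Wz, Uz, Wr, Ur, Wh, Uh):
    """Plain GRU over a sequence of embeddings; returns the final hidden state."""
    h = np.zeros(Uz.shape[0])
    for xt in x:
        z = sigmoid(Wz @ xt + Uz @ h)          # update gate
        r = sigmoid(Wr @ xt + Ur @ h)          # reset gate
        h_tilde = np.tanh(Wh @ xt + Uh @ (r * h))
        h = (1 - z) * h + z * h_tilde
    return h

def cnn_bigru_forward(x, params):
    """Concatenate CNN features with forward/backward GRU states, then softmax."""
    cnn_feats = conv1d_maxpool(x, params["filters"])
    h_fwd = gru_last(x, *params["gru_f"])
    h_bwd = gru_last(x[::-1], *params["gru_b"])    # backward pass = reversed input
    logits = params["Wo"] @ np.concatenate([cnn_feats, h_fwd, h_bwd])
    e = np.exp(logits - logits.max())
    return e / e.sum()  # probabilities over sentiment polarities

# Random weights, just to show the shapes flowing through the model.
emb, hid, n_f, width = 8, 6, 4, 3
mk = lambda *s: rng.standard_normal(s) * 0.1
gru_p = lambda: tuple(mk(hid, emb) if i % 2 == 0 else mk(hid, hid) for i in range(6))
params = {"filters": mk(n_f, width, emb), "gru_f": gru_p(), "gru_b": gru_p(),
          "Wo": mk(3, n_f + 2 * hid)}
probs = cnn_bigru_forward(rng.standard_normal((10, emb)), params)  # 10-token input
```

A real model would learn these weights by backpropagation (e.g., in Keras or PyTorch) and, as the paper describes, feed TF-IDF-selected feature words alongside the sentence vectors.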

20 pages, 5007 KiB  
Article
Improvement of One-Shot-Learning by Integrating a Convolutional Neural Network and an Image Descriptor into a Siamese Neural Network
by Jaime Duque Domingo, Roberto Medina Aparicio and Luis Miguel González Rodrigo
Appl. Sci. 2021, 11(17), 7839; https://0-doi-org.brum.beds.ac.uk/10.3390/app11177839 - 25 Aug 2021
Cited by 5 | Viewed by 2273
Abstract
Over the last few years, several techniques have been developed with the aim of implementing one-shot learning, a concept that allows classifying images with only a single image per training category. Conceptually, these methods seek to reproduce certain behavior that humans have. People are able to recognize a person they have seen only once, but they are probably not able to do the same with certain animals, such as monkeys. This is because our brains have been trained for years with images of people, but not so much with images of animals. Among one-shot learning techniques, some have used data generation, such as Generative Adversarial Networks (GANs). Other techniques have been based on matching descriptors traditionally used for object detection. Finally, one of the most prominent techniques involves Siamese neural networks. Siamese networks are usually implemented as two convolutional nets that share their weights; they receive two images as input and detect whether or not the images belong to the same category. In the field of grocery products, there has been a lot of research on the one-shot learning problem but not so much on the use of Siamese networks. In this paper, several classifiers are first evaluated to decide on a convolutional model to be used with the Siamese network and to improve the baseline results obtained on the dataset used. Then, two existing techniques are integrated within the Siamese model: a convolutional net and a Local Maximal Occurrence (LOMO) descriptor. The latter was initially used for the re-identification of people, although it has proved effective in improving the results of a traditional Siamese network with only convolutional sister branches. The whole network is trained on one set of categories and evaluated on different ones, showing its strong capacity to deal with the problem of having only one image per category.
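The Siamese idea described above can be illustrated with a toy NumPy sketch: one shared "sister" encoder (a linear + ReLU stand-in for the convolutional net; the LOMO descriptor is omitted here) embeds both images, and a sigmoid over the weighted L1 distance between the embeddings scores whether they belong to the same category. All shapes and weights are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def encode(img, W):
    """Shared 'sister' encoder: flatten + linear + ReLU (stand-in for the conv net).
    Both inputs pass through the SAME weights W, which is what makes it Siamese."""
    return np.maximum(0.0, W @ img.ravel())

def siamese_same_prob(img_a, img_b, W, w_out, b_out):
    """Similarity head: sigmoid over a weighted L1 distance of the two embeddings."""
    d = np.abs(encode(img_a, W) - encode(img_b, W))
    return sigmoid(w_out @ d + b_out)

W = rng.standard_normal((16, 8 * 8)) * 0.1      # shared encoder weights
w_out, b_out = rng.standard_normal(16) * 0.1, 0.0
a, b = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
p_same = siamese_same_prob(a, b, W, w_out, b_out)
# Identical inputs give distance 0, so with b_out = 0 the score is sigmoid(0) = 0.5.
p_self = siamese_same_prob(a, a, W, w_out, b_out)
```

In the actual paper the sister branches are convolutional networks combined with LOMO features and trained on labeled same/different image pairs; at test time one reference image per category is enough to classify new images.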
