Artificial Intelligence and Complexity in Art, Music, Games and Design

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Complexity".

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 30580

Special Issue Editors


Dr. Juan Romero
Guest Editor
Computation Department, Universidade da Coruña, A Coruña, Spain
Interests: artificial intelligence; artificial art; computational aesthetics; evolutionary computation; artificial neural networks; deep learning; machine learning; computational creativity

Dr. Colin Johnson
Guest Editor
School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
Interests: artificial intelligence; bio-inspired computation; evolutionary computation; neural networks; computational arts; computer music

Special Issue Information

Dear colleagues,

A major, and potentially unachievable, challenge in computational arts is constructing algorithms that assess properties such as novelty, creativity, and the aesthetic qualities of artistic artefacts or performances. Approaches to this have often drawn on broadly information-theoretic ideas. Ideas linking mathematical notions of form and balance to beauty date back to ancient times, and in the twentieth century attempts were made to produce aesthetic measures based on a balance between order and complexity. More recently, these have been developed in several directions: the idea that aesthetic engagement occurs when a work sits at the "edge of chaos" between excessive order and excessive disorder; formalisations of this using notions such as the Gini coefficient and Shannon entropy; and links between aesthetic theories and cognitive theories of the Bayesian brain and free-energy minimisation. These ideas have been used both for understanding human behaviour and for building creative systems.
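To make the order/disorder framing concrete, here is a minimal sketch (ours, not taken from any of the papers below) of one of the simplest such measures: the Shannon entropy of an image's grey-level histogram, which is zero for a perfectly uniform image and reaches its maximum of log2(256) = 8 bits when all 256 levels occur equally often:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    n = len(values)
    counts = Counter(values)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

flat = [128] * 4096                     # one grey level everywhere: maximal order
noise = [i % 256 for i in range(4096)]  # all 256 levels equally often: maximal disorder

print(shannon_entropy(flat))    # 0.0
print(shannon_entropy(noise))   # 8.0
```

The "edge of chaos" intuition is that aesthetically engaging work sits somewhere between these two extremes, not at either endpoint.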

The use of Artificial Intelligence and complex systems in the development of artistic systems is an exciting and relevant area of research. There is growing interest in applying these techniques to fields such as visual art and music generation, analysis, and interpretation; sound synthesis; architecture; video; poetry; design; game content generation; and other creative tasks.

This Special Issue focuses on the use of both complexity ideas and artificial intelligence methods to analyse and evaluate aesthetic properties, and to drive systems that generate aesthetically engaging artefacts, including but not limited to music, sound, images, animations, designs, architectural plans, choreographies, poetry, text, and jokes.

Dr. Juan Romero
Dr. Colin Johnson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational aesthetics
  • formalising aesthetics using ideas from entropy and information theory
  • computational creativity
  • artificial intelligence in art, design, architecture, music and games
  • information theory in art, design, architecture, music and games
  • complex systems in art, music and design
  • evolutionary art
  • evolutionary music
  • artificial life in the arts
  • swarm art
  • pattern recognition and aesthetics
  • cellular automata in architecture
  • computational intelligence in the arts

Published Papers (9 papers)


Research

28 pages, 25880 KiB  
Article
Improving Deep Interactive Evolution with a Style-Based Generator for Artistic Expression and Creative Exploration
by Carlos Tejeda-Ocampo, Armando López-Cuevas and Hugo Terashima-Marin
Entropy 2021, 23(1), 11; https://0-doi-org.brum.beds.ac.uk/10.3390/e23010011 - 24 Dec 2020
Cited by 2 | Viewed by 2963
Abstract
Deep interactive evolution (DeepIE) combines the capacity of interactive evolutionary computation (IEC) to capture a user’s preference with the domain-specific robustness of a trained generative adversarial network (GAN) generator, allowing the user to control the GAN output through evolutionary exploration of the latent space. However, the traditional GAN latent space presents feature entanglement, which limits the practicability of possible applications of DeepIE. In this paper, we implement DeepIE within a style-based generator from a StyleGAN model trained on the WikiArt dataset and propose StyleIE, a variation of DeepIE that takes advantage of the secondary disentangled latent space in the style-based generator. We performed two AB/BA crossover user tests that compared the performance of DeepIE against StyleIE for art generation. Self-rated evaluations of the performance were collected through a questionnaire. Findings from the tests suggest that StyleIE and DeepIE perform equally in tasks with open-ended goals with relaxed constraints, but StyleIE performs better in close-ended and more constrained tasks.
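The evolutionary exploration of the latent space described above can be sketched in a few lines. Everything below is an illustrative stand-in: the latent dimension is shrunk, and both the GAN generator and the user's interactive image selection are replaced by placeholders, so this shows only the shape of the loop, not the paper's system:

```python
import random

LATENT_DIM = 8  # real models use e.g. 512; kept small for illustration

def random_latent():
    """A latent vector drawn from the standard normal prior."""
    return [random.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]

def crossover(a, b):
    """Uniform crossover: each gene comes from one of the two parents."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(z, sigma=0.3):
    """Gaussian perturbation of the latent vector."""
    return [x + random.gauss(0.0, sigma) for x in z]

def next_generation(selected, pop_size=8):
    """Breed a new population from the latents the user selected."""
    return [mutate(crossover(random.choice(selected), random.choice(selected)))
            for _ in range(pop_size)]

random.seed(1)
population = [random_latent() for _ in range(8)]
# Stand-in for the interactive step: the user would view images generated
# from these latents and pick favourites; here we simply keep the first two.
selected = population[:2]
population = next_generation(selected)
```

In the real systems, each latent is pushed through the trained generator to produce an image before the selection step; StyleIE's change is to evolve vectors in the disentangled intermediate space rather than the entangled input space.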

41 pages, 1458 KiB  
Article
New Interfaces and Approaches to Machine Learning When Classifying Gestures within Music
by Chris Rhodes, Richard Allmendinger and Ricardo Climent
Entropy 2020, 22(12), 1384; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121384 - 7 Dec 2020
Cited by 6 | Viewed by 4076
Abstract
Interactive music uses wearable sensors (i.e., gestural interfaces—GIs) and biometric datasets to reinvent traditional human–computer interaction and enhance music composition. In recent years, machine learning (ML) has been important for the artform. This is because ML helps process complex biometric datasets from GIs when predicting musical actions (termed performance gestures). ML allows musicians to create novel interactions with digital media. Wekinator is a popular ML software amongst artists, allowing users to train models through demonstration. It is built on the Waikato Environment for Knowledge Analysis (WEKA) framework, which is used to build supervised predictive models. Previous research has used biometric data from GIs to train specific ML models. However, previous research does not inform optimum ML model choice, within music, or compare model performance. Wekinator offers several ML models. Thus, we used Wekinator and the Myo armband GI and study three performance gestures for piano practice to solve this problem. Using these, we trained all models in Wekinator and investigated their accuracy, how gesture representation affects model accuracy and if optimisation can arise. Results show that neural networks are the strongest continuous classifiers, mapping behaviour differs amongst continuous models, optimisation can occur and gesture representation disparately affects model mapping behaviour; impacting music practice.

22 pages, 581 KiB  
Article
A Genetic Programming-Based Low-Level Instructions Robot for Realtimebattle
by Juan Romero, Antonino Santos, Adrian Carballal, Nereida Rodriguez-Fernandez, Iria Santos, Alvaro Torrente-Patiño, Juan Tuñas and Penousal Machado
Entropy 2020, 22(12), 1362; https://0-doi-org.brum.beds.ac.uk/10.3390/e22121362 - 30 Nov 2020
Cited by 2 | Viewed by 2206
Abstract
RealTimeBattle is an environment in which robots controlled by programs fight each other. Programs control the simulated robots using low-level messages (e.g., turn radar, accelerate). Unlike other tools like Robocode, each of these robots can be developed using different programming languages. Our purpose is to generate, without human programming or other intervention, a robot that is highly competitive in RealTimeBattle. To that end, we implemented an Evolutionary Computation technique: Genetic Programming. The robot controllers created in the course of the experiments exhibit several different and effective combat strategies such as avoidance, sniping, encircling and shooting. To further improve their performance, we propose a function-set that includes short-term memory mechanisms, which allowed us to evolve a robot that is superior to all of the rivals used for its training. The robot was also tested in a bout with the winner of the previous “RealTimeBattle Championship”, which it won. Finally, our robot was tested in a multi-robot battle arena, with five simultaneous opponents, and obtained the best results among the contenders.

19 pages, 941 KiB  
Article
Generating Artificial Reverberation via Genetic Algorithms for Real-Time Applications
by Edward Ly and Julián Villegas
Entropy 2020, 22(11), 1309; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111309 - 17 Nov 2020
Cited by 2 | Viewed by 2841
Abstract
We introduce a Virtual Studio Technology (VST) 2 audio effect plugin that performs convolution reverb using synthetic Room Impulse Responses (RIRs) generated via a Genetic Algorithm (GA). The parameters of the plugin include some of those defined under the ISO 3382-1 standard (e.g., reverberation time, early decay time, and clarity), which are used to determine the fitness values of potential RIRs so that the user has some control over the shape of the resulting RIRs. In the GA, these RIRs are initially generated via a custom Gaussian noise method, and then evolve via truncation selection, random weighted average crossover, and mutation via Gaussian multiplication in order to produce RIRs that resemble real-world, recorded ones. Binaural Room Impulse Responses (BRIRs) can also be generated by assigning two different RIRs to the left and right stereo channels. With the proposed audio effect, new RIRs that represent virtual rooms, some of which may even be impossible to replicate in the physical world, can be generated and stored. Objective evaluation of the GA shows that contradictory combinations of parameter values will produce RIRs with low fitness. Additionally, through subjective evaluation, it was determined that RIRs generated by the GA were still perceptually distinguishable from similar real-world RIRs, but the perceptual differences were reduced when longer execution times were used for generating the RIRs or the unprocessed audio signals were comprised of only speech.
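The GA loop described in the abstract can be sketched in miniature. This is our illustration under assumed values (the target decay envelope, population size, and mutation strength are made up, and the fitness is a crude envelope match rather than the plugin's ISO 3382-1 parameters); it keeps only the operator structure named in the abstract: Gaussian-noise initialisation, truncation selection, random weighted average crossover, and mutation via Gaussian multiplication:

```python
import math
import random

FS = 2000        # sample rate (kept low so the sketch runs fast)
N = FS           # 1-second candidate impulse responses
BLOCK = 50

# Target amplitude envelope: exponential decay reaching -60 dB at 1 s,
# i.e. an assumed RT60 of 1 s (not a value taken from the paper).
TARGET_ENV = [math.exp(-6.9 * i / N) for i in range(0, N, BLOCK)]

def random_rir():
    """Gaussian noise shaped by a random exponential decay envelope."""
    rate = random.uniform(1.0, 15.0)
    return [random.gauss(0, 1) * math.exp(-rate * i / N) for i in range(N)]

def envelope(rir):
    """Coarse amplitude envelope: mean |x| per block, normalised to start at 1."""
    env = [sum(abs(x) for x in rir[i:i + BLOCK]) / BLOCK
           for i in range(0, N, BLOCK)]
    return [e / env[0] for e in env]

def fitness(rir):
    """Negative squared distance between the RIR's envelope and the target."""
    return -sum((e - t) ** 2 for e, t in zip(envelope(rir), TARGET_ENV))

random.seed(2)
pop = [random_rir() for _ in range(24)]
start_best = max(fitness(r) for r in pop)

for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:6]                        # truncation selection: keep the top 25%
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = random.sample(parents, 2)
        w = random.random()                  # random weighted average crossover
        child = [w * x + (1 - w) * y for x, y in zip(a, b)]
        child = [x * random.gauss(1.0, 0.05) for x in child]  # Gaussian multiplication
        children.append(child)
    pop = parents + children

end_best = max(fitness(r) for r in pop)
assert end_best >= start_best   # keeping the parents makes the search elitist
```

The real plugin scores candidates against user-set acoustic parameters and can pair two evolved RIRs to form a binaural response.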

30 pages, 7770 KiB  
Article
A Computational Model of Tonal Tension Profile of Chord Progressions in the Tonal Interval Space
by María Navarro-Cáceres, Marcelo Caetano, Gilberto Bernardes, Mercedes Sánchez-Barba and Javier Merchán Sánchez-Jara
Entropy 2020, 22(11), 1291; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111291 - 13 Nov 2020
Cited by 9 | Viewed by 3889
Abstract
In tonal music, musical tension is strongly associated with musical expression, particularly with expectations and emotions. Most listeners are able to perceive musical tension subjectively, yet musical tension is difficult to be measured objectively, as it is connected with musical parameters such as rhythm, dynamics, melody, harmony, and timbre. Musical tension specifically associated with melodic and harmonic motion is called tonal tension. In this article, we are interested in perceived changes of tonal tension over time for chord progressions, dubbed tonal tension profiles. We propose an objective measure capable of capturing tension profile according to different tonal music parameters, namely, tonal distance, dissonance, voice leading, and hierarchical tension. We performed two experiments to validate the proposed model of tonal tension profile and compared against Lerdahl’s model and MorpheuS across 12 chord progressions. Our results show that the considered four tonal parameters contribute differently to the perception of tonal tension. In our model, their relative importance adopts the following weights, summing to unity: dissonance (0.402), hierarchical tension (0.246), tonal distance (0.202), and voice leading (0.193). The assumption that listeners perceive global changes in tonal tension as prototypical profiles is strongly suggested in our results, which outperform the state-of-the-art models.
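The weighted combination reported above is easy to illustrate. The weights are the ones quoted in the abstract; the per-chord parameter values below are invented for the example (a made-up I-IV-V7-I progression with hand-picked normalised values), not data from the paper:

```python
# Weights reported in the abstract; the chord parameter values are made up.
WEIGHTS = {"dissonance": 0.402, "hierarchical": 0.246,
           "distance": 0.202, "voice_leading": 0.193}

def tension(params):
    """Weighted linear combination of the four tonal parameters."""
    return sum(WEIGHTS[k] * params[k] for k in WEIGHTS)

# Hypothetical I-IV-V7-I progression with hand-picked normalised values.
progression = [
    {"dissonance": 0.1, "hierarchical": 0.0, "distance": 0.0, "voice_leading": 0.0},
    {"dissonance": 0.3, "hierarchical": 0.4, "distance": 0.5, "voice_leading": 0.3},
    {"dissonance": 0.7, "hierarchical": 0.6, "distance": 0.4, "voice_leading": 0.4},
    {"dissonance": 0.1, "hierarchical": 0.0, "distance": 0.1, "voice_leading": 0.2},
]
profile = [tension(chord) for chord in progression]
# Tension rises toward the dominant seventh and relaxes at the final tonic.
assert profile[2] == max(profile) and profile[3] < profile[2]
```

A profile like this, one scalar per chord, is what the paper compares against listeners' perceived tension curves.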

16 pages, 20160 KiB  
Article
Generative Art with Swarm Landscapes
by Diogo de Andrade, Nuno Fachada, Carlos M. Fernandes and Agostinho C. Rosa
Entropy 2020, 22(11), 1284; https://0-doi-org.brum.beds.ac.uk/10.3390/e22111284 - 12 Nov 2020
Cited by 5 | Viewed by 3975
Abstract
We present a generative swarm art project that creates 3D animations by running a Particle Swarm Optimization algorithm over synthetic landscapes produced by an objective function. Different kinds of functions are explored, including mathematical expressions, Perlin noise-based terrain, and several image-based procedures. A method for displaying the particle swarm exploring the search space in aesthetically pleasing ways is described. Several experiments are detailed and analyzed and a number of interesting visual artifacts are highlighted.
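A bare-bones version of the underlying mechanism, a particle swarm exploring a synthetic landscape while its positions are recorded frame by frame, might look like this. The bowl-shaped objective and all constants are our assumptions for the sketch; the paper's landscapes include Perlin-noise terrain and image-based functions:

```python
import random

def objective(x, y):
    """A stand-in synthetic landscape: a simple bowl."""
    return x * x + y * y

random.seed(0)
W, C1, C2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients (assumed)
swarm = [{"pos": [random.uniform(-5, 5), random.uniform(-5, 5)],
          "vel": [0.0, 0.0]} for _ in range(20)]
for p in swarm:
    p["best"] = list(p["pos"])
gbest = list(min((p["pos"] for p in swarm), key=lambda q: objective(*q)))

frames = []   # particle positions per iteration: raw material for the animation
for _ in range(50):
    for p in swarm:
        for d in range(2):
            r1, r2 = random.random(), random.random()
            p["vel"][d] = (W * p["vel"][d]
                           + C1 * r1 * (p["best"][d] - p["pos"][d])
                           + C2 * r2 * (gbest[d] - p["pos"][d]))
            p["pos"][d] += p["vel"][d]
        if objective(*p["pos"]) < objective(*p["best"]):
            p["best"] = list(p["pos"])
        if objective(*p["pos"]) < objective(*gbest):
            gbest = list(p["pos"])
    frames.append([list(p["pos"]) for p in swarm])

# The swarm contracts toward the optimum; each entry in `frames` could be
# rendered as one frame of the animation.
assert objective(*gbest) < 0.05
```

The generative-art twist is that the recorded trajectories, not the optimum itself, are the artwork.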

15 pages, 609 KiB  
Article
On-The-Fly Synthesizer Programming with Fuzzy Rule Learning
by Iván Paz, Àngela Nebot, Francisco Mugica and Enrique Romero
Entropy 2020, 22(9), 969; https://0-doi-org.brum.beds.ac.uk/10.3390/e22090969 - 31 Aug 2020
Cited by 1 | Viewed by 2131
Abstract
This manuscript explores fuzzy rule learning for sound synthesizer programming within the performative practice known as live coding. In this practice, sound synthesis algorithms are programmed in real time by means of source code. To facilitate this, one possibility is to automatically create variations out of a few synthesizer presets. However, the need for real-time feedback makes existent synthesizer programmers unfeasible to use. In addition, sometimes presets are created mid-performance and as such no benchmarks exist. Inductive rule learning has shown to be effective for creating real-time variations in such a scenario. However, logical IF-THEN rules do not cover the whole feature space. Here, we present an algorithm that extends IF-THEN rules to hyperrectangles, which are used as the cores of membership functions to create a map of the input space. To generalize the rules, the contradictions are solved by a maximum volume heuristics. The user controls the novelty-consistency balance with respect to the input data using the algorithm parameters. The algorithm was evaluated in live performances and by cross-validation using extrinsic-benchmarks and a dataset collected during user tests. The model’s accuracy achieves state-of-the-art results. This, together with the positive criticism received from live coders that tested our methodology, suggests that this is a promising approach.
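The step from crisp IF-THEN rules to hyperrectangle cores can be illustrated with a toy membership function. The trapezoidal falloff, the margin width, and the two "synthesizer parameters" below are assumptions for the sketch, not details from the paper:

```python
def membership(x, lo, hi, margin=0.25):
    """Trapezoidal membership: 1 inside the core interval [lo, hi],
    falling linearly to 0 over `margin` outside it."""
    if lo <= x <= hi:
        return 1.0
    d = (lo - x) if x < lo else (x - hi)
    return max(0.0, 1.0 - d / margin)

def rect_membership(point, rect):
    """Membership of a point in a hyperrectangle core: the minimum
    (fuzzy AND) of the per-dimension memberships."""
    return min(membership(x, lo, hi) for x, (lo, hi) in zip(point, rect))

# A made-up rule core over two normalised synth parameters (e.g. cutoff,
# resonance): cutoff in [0.2, 0.4] AND resonance in [0.6, 0.8].
rule = [(0.2, 0.4), (0.6, 0.8)]
assert rect_membership((0.3, 0.7), rule) == 1.0   # inside the core
assert rect_membership((0.3, 0.9), rule) > 0.0    # near the core: partial match
assert rect_membership((0.9, 0.1), rule) == 0.0   # far away
```

Unlike a crisp rule, the fuzzy version assigns a graded degree to presets near the core, which is what lets the learned map cover the whole parameter space.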

23 pages, 3116 KiB  
Article
Gaze Information Channel in Van Gogh’s Paintings
by Qiaohong Hao, Lijing Ma, Mateu Sbert, Miquel Feixas and Jiawan Zhang
Entropy 2020, 22(5), 540; https://0-doi-org.brum.beds.ac.uk/10.3390/e22050540 - 12 May 2020
Cited by 2 | Viewed by 3215
Abstract
This paper uses quantitative eye tracking indicators to analyze the relationship between images of paintings and human viewing. First, we build the eye tracking fixation sequences through areas of interest (AOIs) into an information channel, the gaze channel. Although this channel can be interpreted as a generalization of a first-order Markov chain, we show that the gaze channel is fully independent of this interpretation, and stands even when first-order Markov chain modeling would no longer fit. The entropy of the equilibrium distribution and the conditional entropy of a Markov chain are extended with additional information-theoretic measures, such as joint entropy, mutual information, and conditional entropy of each area of interest. Then, the gaze information channel is applied to analyze a subset of Van Gogh paintings. Van Gogh artworks, classified by art critics into several periods, have been studied under computational aesthetics measures, which include the use of Kolmogorov complexity and permutation entropy. The gaze information channel paradigm allows the information-theoretic measures to analyze both individual gaze behavior and clustered behavior from observers and paintings. Finally, we show that there is a clear correlation between the gaze information channel quantities that come from direct human observation, and the computational aesthetics measures that do not rely on any human observation at all.
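The basic information-theoretic quantities of such a gaze channel can be computed directly from a fixation sequence. The sequence below is invented for illustration; the paper derives its sequences from eye-tracking recordings over AOIs in Van Gogh paintings:

```python
import math
from collections import Counter

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Toy fixation sequence over three AOIs labelled A, B, C (made up).
fixations = "AABBBCAABCCBAABBCCAB"
pairs = Counter(zip(fixations, fixations[1:]))   # first-order transitions
total = sum(pairs.values())
joint = {ij: c / total for ij, c in pairs.items()}  # p(current, next)

px, py = Counter(), Counter()                    # channel input/output marginals
for (i, j), p in joint.items():
    px[i] += p
    py[j] += p

hx, hy = entropy(px.values()), entropy(py.values())
hxy = entropy(joint.values())
mutual_info = hx + hy - hxy   # I(X;Y): how predictive one fixation is of the next
```

Mutual information is bounded by the marginal entropies, and a high value means the viewer's next fixation is strongly constrained by the current one, which is the kind of quantity the paper correlates with observer-free aesthetic measures.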

14 pages, 603 KiB  
Article
Comparison of Outlier-Tolerant Models for Measuring Visual Complexity
by Adrian Carballal, Carlos Fernandez-Lozano, Nereida Rodriguez-Fernandez, Iria Santos and Juan Romero
Entropy 2020, 22(4), 488; https://0-doi-org.brum.beds.ac.uk/10.3390/e22040488 - 24 Apr 2020
Cited by 6 | Viewed by 2850
Abstract
Providing the visual complexity of an image in terms of impact or aesthetic preference can be of great applicability in areas such as psychology or marketing. To this end, certain areas such as Computer Vision have focused on identifying features and computational models that allow for satisfactory results. This paper studies the application of recent ML models using input images evaluated by humans and characterized by features related to visual complexity. According to the experiments carried out, it was confirmed that one of these methods, Correlation by Genetic Search (CGS), based on the search for minimum sets of features that maximize the correlation of the model with respect to the input data, predicted human ratings of image visual complexity better than any other model referenced to date in terms of correlation, RMSE or minimum number of features required by the model. In addition, the variability of these terms were studied eliminating images considered as outliers in previous studies, observing the robustness of the method when selecting the most important variables to make the prediction.
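As a rough illustration of correlation-driven feature selection, here is a greedy stand-in (not the paper's genetic search, and with a deliberately simple mean-of-features model) that adds features only while doing so improves the correlation between predictions and ratings; the synthetic data and all names are our assumptions:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def predictor(rows, subset):
    """A deliberately simple model: the mean of the selected features."""
    return [statistics.fmean(row[i] for i in subset) for row in rows]

def greedy_select(rows, targets):
    """Add features one at a time while correlation keeps improving."""
    chosen, best = [], -1.0
    while True:
        candidates = [i for i in range(len(rows[0])) if i not in chosen]
        scored = [(pearson(predictor(rows, chosen + [i]), targets), i)
                  for i in candidates]
        score, idx = max(scored)
        if score <= best:
            return chosen, best
        chosen, best = chosen + [idx], score

# Synthetic data: feature 0 drives the rating; features 1-2 are pure noise.
random.seed(3)
rows = [[random.random() for _ in range(3)] for _ in range(200)]
ratings = [r[0] + 0.05 * random.gauss(0, 1) for r in rows]
subset, corr = greedy_select(rows, ratings)
assert subset == [0] and corr > 0.9   # the informative feature alone suffices
```

The genetic search in CGS explores feature subsets more globally than this greedy loop, but the objective, a small subset with maximal correlation to human ratings, is the same.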
