Special Issues on Languages Processing

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 October 2017) | Viewed by 32093

Special Issue Editors


Guest Editor
Department of Informatics, Media Arts and Design School, Polytechnic of Porto, 4200-465 Porto, Portugal
Interests: computer programming education; gamification; knowledge management systems; e-learning

Special Issue Information

Dear Colleagues,

We use languages all the time: first, to communicate with each other; later, to communicate with computers; and, more recently, with the advent of networks, to make computers communicate among themselves. All these forms of communication use different languages, but languages that still share many similarities. In this Special Issue, we publish extended versions of the best papers selected from the Symposium on Languages, Applications and Technologies (SLATE'17).

This Special Issue addresses all three kinds of language processing: Human–Human Languages (HHL), Human–Computer Languages (HCL), and Computer–Computer Languages (CCL).

Prof. Ricardo Queirós
Prof. Mário Pinto
Prof. Filipe Portela
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

17 pages, 270 KiB  
Article
A Survey on Portuguese Lexical Knowledge Bases: Contents, Comparison and Combination
by Hugo Gonçalo Oliveira
Information 2018, 9(2), 34; https://doi.org/10.3390/info9020034 - 02 Feb 2018
Cited by 15 | Viewed by 4200
Abstract
In the last decade, several lexical-semantic knowledge bases (LKBs) were developed for Portuguese, by different teams and following different approaches. Most of them are open and freely available for the community. Those LKBs are briefly analysed here, with a focus on size, structure, and overlapping contents. However, we go further and exploit all of the analysed LKBs in the creation of new LKBs, based on the redundant contents. Both original and redundancy-based LKBs are then compared, indirectly, based on the performance of automatic procedures that exploit them for solving four different semantic analysis tasks. In addition to conclusions on the performance of the original LKBs, results show that, instead of selecting a single LKB to use, it is generally worth combining the contents of all the open Portuguese LKBs, towards better results.
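
The redundancy-based combination lends itself to a compact illustration. Below is a minimal Python sketch, not the author's code: each LKB is modeled as a set of relation triples, and a combined LKB keeps only the triples confirmed by a minimum number of sources. The sample triples and the threshold are hypothetical.

```python
from collections import Counter
from itertools import chain

# Each LKB is modeled as a set of (word, relation, word) triples.
lkb_a = {("carro", "SYNONYM-OF", "automóvel"), ("gato", "HYPONYM-OF", "animal")}
lkb_b = {("carro", "SYNONYM-OF", "automóvel"), ("cão", "HYPONYM-OF", "animal")}
lkb_c = {("carro", "SYNONYM-OF", "automóvel"), ("gato", "HYPONYM-OF", "animal")}

def combine_by_redundancy(lkbs, min_sources=2):
    """Keep only the triples that occur in at least `min_sources` LKBs."""
    counts = Counter(chain.from_iterable(lkbs))
    return {triple for triple, n in counts.items() if n >= min_sources}

# Triples confirmed by two or more of the three LKBs above.
print(combine_by_redundancy([lkb_a, lkb_b, lkb_c], min_sources=2))
```
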
19 pages, 299 KiB  
Article
CSS Preprocessing: Tools and Automation Techniques
by Ricardo Queirós
Information 2018, 9(1), 17; https://doi.org/10.3390/info9010017 - 12 Jan 2018
Cited by 3 | Viewed by 6232
Abstract
Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language, more precisely, for styling Web documents. However, in the last few years, the landscape for CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches, known as CSS preprocessors, give developers mechanisms to preprocess CSS rules through the use of programming constructs, with the ultimate goal of bringing those missing constructs to the CSS realm and fostering the structured programming of stylesheets. At the same time, a new set of tools, known as postprocessors, appeared for extension and automation purposes, covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques in hand, developers need a consistent workflow to foster modular CSS coding. This paper presents an introductory survey of CSS processors. The survey gathers information on a specific set of processors, categorizes them, and compares their features against a set of predefined criteria such as maturity, coverage, and performance. Finally, we propose a basic set of best practices for setting up a simple and pragmatic styling code workflow.
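
As a toy illustration of the core service a preprocessor provides, the Python sketch below resolves Sass-like `$variable` declarations into plain CSS. It stands in for no real tool: actual preprocessors such as Sass or Less also offer nesting, mixins, and functions.

```python
import re

# Toy stylesheet using a Sass-like "$name" variable syntax.
SOURCE = """
$brand: #336699;
$pad: 8px;
.button { color: $brand; padding: $pad; }
.link   { color: $brand; }
"""

def preprocess(source):
    """Resolve $variable declarations and uses into plain CSS."""
    variables = dict(re.findall(r"\$([\w-]+):\s*([^;]+);", source))
    # Drop the variable declarations, then substitute each use.
    css = re.sub(r"\$[\w-]+:\s*[^;]+;\s*", "", source)
    return re.sub(r"\$([\w-]+)", lambda m: variables[m.group(1)], css)

print(preprocess(SOURCE))
```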

24 pages, 1534 KiB  
Article
Automata Approach to XML Data Indexing
by Eliška Šestáková and Jan Janoušek
Information 2018, 9(1), 12; https://doi.org/10.3390/info9010012 - 06 Jan 2018
Cited by 1 | Viewed by 6039
Abstract
The internal structure of XML documents can be viewed as a tree. Trees are among the fundamental and well-studied data structures in computer science. They express a hierarchical structure and are widely used in many applications. This paper focuses on the problem of processing tree data structures; particularly, it studies the XML index problem. Although many state-of-the-art methods exist, the XML index problem remains an active research area. However, existing methods usually lack clear references to a systematic approach grounded in the standard theory of formal languages and automata. Therefore, we present new methods that solve the XML index problem using automata theory. These methods are simple and allow one to efficiently process a small subset of XPath. Thus, given an XML data structure, our methods can be used as efficient auxiliary data structures that answer a particular set of queries, e.g., XPath queries using any combination of the child and descendant-or-self axes. Given an XML tree model with n nodes, the searching phase uses the index, reads an input query of size m, and finds the answer in time O(m), independent of the size of the original XML document.
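A heavily simplified Python sketch can convey the flavor of such an index (this is not the authors' automata construction, and it covers the child axis only): a dictionary keyed by root-to-node label paths plays the role of the automaton's states, so a query of size m is answered with a single O(m) lookup, independent of document size.

```python
import xml.etree.ElementTree as ET

XML = "<library><book><title>A</title></book><book><title>B</title></book></library>"

def build_path_index(root):
    """Map each distinct root-to-node label path to its matching elements."""
    index = {}
    stack = [(root, (root.tag,))]
    while stack:
        node, path = stack.pop()
        index.setdefault(path, []).append(node)
        stack.extend((child, path + (child.tag,)) for child in node)
    return index

index = build_path_index(ET.fromstring(XML))

# Answering the child-axis query /library/book/title is one O(m) lookup,
# regardless of how large the document is.
for elem in index.get(("library", "book", "title"), []):
    print(elem.text)
```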

19 pages, 1149 KiB  
Article
EmoSpell, a Morphological and Emotional Word Analyzer
by Maria Inês Maia and José Paulo Leal
Information 2018, 9(1), 1; https://doi.org/10.3390/info9010001 - 03 Jan 2018
Cited by 2 | Viewed by 5633
Abstract
The analysis of sentiments, emotions, and opinions in texts is increasingly important in the current digital world. The existing lexicons with emotional annotations for the Portuguese language are oriented to polarities, classifying words as positive, negative, or neutral. To identify the emotional load intended by the author, it is necessary to also categorize the emotions expressed by individual words. EmoSpell is an extension of a morphological analyzer with semantic annotations of the emotional value of words. It uses Jspell as the morphological analyzer and a new dictionary with emotional annotations. This dictionary incorporates the lexical base EMOTAIX.PT, which classifies words based on three different levels of emotions—global, specific, and intermediate. This paper describes the generation of the EmoSpell dictionary using three sources: the Jspell Portuguese dictionary and the lexical bases EMOTAIX.PT and SentiLex-PT. Additionally, this paper details the Web application and Web service that exploit this dictionary. It also presents a validation of the proposed approach using a corpus of student texts with different emotional loads. The validation compares the analyses provided by EmoSpell with the mentioned emotional lexical bases on the ability to recognize emotional words and extract the dominant emotion from a text.
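The dictionary lookup at the heart of this pipeline can be pictured with a short Python sketch. The miniature lexicon and its category labels below are hypothetical placeholders, not EMOTAIX.PT's actual categories.

```python
import re
from collections import Counter

# Hypothetical miniature lexicon: each Portuguese word maps to a
# global-level emotion category (labels are illustrative only).
EMOTION_LEXICON = {
    "alegria": "positive-emotion",  # joy
    "feliz": "positive-emotion",    # happy
    "medo": "negative-emotion",     # fear
    "raiva": "negative-emotion",    # anger
}

def dominant_emotion(text):
    """Return the most frequent emotion category among recognized words."""
    words = re.findall(r"\w+", text.lower())
    hits = Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else None

print(dominant_emotion("Feliz e com alegria, apesar do medo"))  # positive-emotion
```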

291 KiB  
Article
Source Code Documentation Generation Using Program Execution
by Matúš Sulír and Jaroslav Porubän
Information 2017, 8(4), 148; https://doi.org/10.3390/info8040148 - 17 Nov 2017
Cited by 6 | Viewed by 4563
Abstract
Automated source code documentation approaches often describe methods in abstract terms, using the words contained in the static source code or code excerpts from repositories. In this paper, we describe DynamiDoc: a simple automated documentation generator based on dynamic analysis. Our representation-based approach traces the program being executed and records string representations of concrete argument values, return values, and the target object's state before and after each method execution. Then, for each method, it generates documentation sentences with examples, such as “When called on [3, 1.2] with element = 3, the object changed to [1.2]”. Advantages and shortcomings of the approach are listed. We also found that the generated sentences are substantially shorter than the methods they describe. According to our small-scale study, the majority of objects in the generated documentation have their string representations overridden, which further confirms the potential usefulness of our approach. Finally, we propose an alternative, variable-based approach that describes the values of individual member variables, rather than the state of an object as a whole.
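The sentence-generation idea is easy to re-create in a few lines of Python. The decorator below is only a sketch of the concept; DynamiDoc itself traces real program executions rather than relying on hand-placed decorators.

```python
import functools

def dynamidoc(method):
    """Record object state around a call and emit a documentation sentence."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        before = str(self)
        result = method(self, *args, **kwargs)
        after = str(self)
        params = method.__code__.co_varnames[1:]  # skip `self`
        arg_text = ", ".join(f"{p} = {a!r}" for p, a in zip(params, args))
        if before != after:
            print(f"When called on {before} with {arg_text}, "
                  f"the object changed to {after}.")
        return result
    return wrapper

class Bag(list):
    @dynamidoc
    def remove_element(self, element):
        self.remove(element)

Bag([3, 1.2]).remove_element(3)
# -> When called on [3, 1.2] with element = 3, the object changed to [1.2].
```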

518 KiB  
Article
On the Implementation of a Cloud-Based Computing Test Bench Environment for Prolog Systems
by Ricardo Gonçalves, Miguel Areias and Ricardo Rocha
Information 2017, 8(4), 129; https://doi.org/10.3390/info8040129 - 19 Oct 2017
Viewed by 4267
Abstract
Software testing and benchmarking are key components of the software development process. Nowadays, a good practice in large software projects is the continuous integration (CI) software development technique. The key idea of CI is to let developers integrate their work as they produce it, instead of performing the integration at the end of each software module. In this paper, we extend previous work on a benchmark suite for the YAP Prolog system and propose a fully automated test bench environment for Prolog systems, named Yet Another Prolog Test Bench Environment (YAPTBE), aimed at assisting developers in the development and CI of Prolog systems. YAPTBE is based on a cloud computing architecture and relies on the Jenkins framework, as well as a new Jenkins plugin, to manage the underlying infrastructure. We present the key design and implementation aspects of YAPTBE and show its most important features, such as its graphical user interface (GUI) and the automated process that builds and runs Prolog systems and benchmarks.
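A single job in such an environment boils down to running one benchmark on one Prolog system and recording the outcome. The Python sketch below shows that step only; the command-line flags follow SWI-Prolog's conventions and are an assumption here, and YAPTBE itself orchestrates such runs through Jenkins on cloud nodes.

```python
import subprocess
import time

def run_benchmark(prolog_cmd, benchmark_file, goal="main", timeout=60):
    """Run one benchmark on one Prolog system; return (status, seconds)."""
    start = time.perf_counter()
    try:
        proc = subprocess.run(
            [prolog_cmd, "-g", goal, "-t", "halt", benchmark_file],
            capture_output=True, text=True, timeout=timeout)
        status = "ok" if proc.returncode == 0 else "failed"
    except subprocess.TimeoutExpired:
        status = "timeout"
    return status, time.perf_counter() - start

# e.g.: status, secs = run_benchmark("swipl", "benchs/nrev.pl")
```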
