Semantic Aspects in Natural Language Processing

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: closed (30 June 2021)

Special Issue Editor

Prof. Dr. Frank Drewes
Guest Editor
Department of Computing Science, Umeå University, S-90187 Umeå, Sweden
Interests: natural language processing; semantic parsing; multimodal analysis; graph-based knowledge representation; formal languages; graph grammars; tree automata

Special Issue Information

Dear Colleagues,

Natural language is the prevailing medium through which information is made available on the Internet: as text or speech, standing on its own, accompanied by pictures, or embedded in video, and coming in forms that range from short, highly colloquial, and often ungrammatical forum posts and tweets, through commercials and scientific articles, to legal texts. To make sense of this information, to aggregate, search, evaluate, or verify it, or to present it to the user in a way that provides added value, software is needed that can make decisions based on the semantics, or meaning, of natural language utterances.

The aim of this Special Issue is to gather contributions—both original work and survey articles—regarding algorithms and language processing techniques for the semantic analysis of natural language, the representation of the results of such an analysis, and the further processing of these representations. Areas of interest include but are not limited to:

  • Semantic parsing and meaning representation;
  • Machine learning approaches to language understanding;
  • Semantic analysis of natural language in a multimodal context;
  • Language grounding and natural language interaction (e.g., with robots);
  • Question answering;
  • Sentiment analysis and emotion recognition;
  • Identification of bias and fake news;
  • Formal models and algorithms for any of the above.

Prof. Dr. Frank Drewes
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • natural language processing
  • semantic parsing
  • meaning representation
  • language understanding
  • language grounding
  • question answering
  • sentiment analysis
  • emotion identification
  • bias
  • fake news

Published Papers (1 paper)


Research

13 pages, 1025 KiB  
Article
Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation
by Wenbo Zhang, Xiao Li, Yating Yang, Rui Dong and Gongxu Luo
Future Internet 2020, 12(12), 215; https://doi.org/10.3390/fi12120215 - 27 Nov 2020
Cited by 2
Abstract
Recently, the pretraining of models has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the encoder and decoder of the translation model, which greatly improves the translation quality. However, because of a mismatch in the number of layers, the pretrained model can only initialize part of the decoder’s parameters. In this paper, we use a layer-wise coordination transformer and a consistent pretraining translation transformer instead of a vanilla transformer as the translation model. The former has only an encoder, and the latter has an encoder and a decoder, but the encoder and decoder have exactly the same parameters. Both models can guarantee that all parameters in the translation model can be initialized by the pretrained model. Experiments on the Chinese–English and English–German datasets show that compared with the vanilla transformer baseline, our models achieve better performance with fewer parameters when the parallel corpus is small.
(This article belongs to the Special Issue Semantic Aspects in Natural Language Processing)
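The parameter-sharing idea in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch example, not the authors' implementation: the class name, dimensions, and the simplified joint attention in decode are assumptions, and causal masking is omitted for brevity. It shows only the structural point that when encoder and decoder reuse one layer stack, a pretrained masked-language-model checkpoint for that stack initializes every parameter of the translation model.

```python
# Illustrative sketch only: encoder and decoder reuse one shared layer stack,
# so a pretrained masked LM can initialize *all* translation-model parameters.
# Names, dimensions, and the joint attention below are assumptions.
import torch
import torch.nn as nn

class SharedTransformer(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # A single stack of layers is instantiated once and reused below.
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        self.proj = nn.Linear(d_model, vocab_size)

    def encode(self, src):
        x = self.embed(src)
        for layer in self.layers:  # the stack encodes the source...
            x = layer(x)
        return x

    def decode(self, tgt, memory):
        x = self.embed(tgt)
        for layer in self.layers:
            # ...and also processes the target jointly with the source, so
            # every "decoder" parameter is literally an encoder parameter
            # (no cross-attention modules of its own; no causal mask here).
            x = layer(torch.cat([memory, x], dim=1))[:, memory.size(1):]
        return self.proj(x)

model = SharedTransformer()
# Loading a pretrained masked-LM checkpoint for the shared stack would
# therefore initialize every parameter, not just the encoder's.
src = torch.randint(0, 32000, (2, 7))   # toy source batch
tgt = torch.randint(0, 32000, (2, 5))   # toy target batch
logits = model.decode(tgt, model.encode(src))
print(logits.shape)  # torch.Size([2, 5, 32000])
```

A real decoder pass would add causal masking and step-wise decoding; the sketch only demonstrates that the decoder introduces no parameters beyond those of the shared stack, which is what allows full initialization from a pretrained model.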