Article

Ontology-Based Knowledge Representation in Robotic Systems: A Survey Oriented toward Applications

Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Korea
*
Author to whom correspondence should be addressed.
Submission received: 9 March 2021 / Revised: 24 April 2021 / Accepted: 29 April 2021 / Published: 11 May 2021

Abstract
Knowledge representation in autonomous robots with social roles has steadily gained importance through their supportive task assistance in domestic, hospital, and industrial activities. For active assistance, these robots must process semantic knowledge to perform their tasks more efficiently. In this context, ontology-based knowledge representation and reasoning (KR & R) techniques appear as a powerful tool and provide sophisticated domain knowledge for processing complex robotic tasks in a real-world environment. In this article, we survey ontology-based semantic representation as integrated into current robotic knowledge base systems, with a three-fold aim: (i) to present recent developments in ontology-based knowledge representation systems that have led to effective solutions for real-world robotic applications; (ii) to review the selected knowledge-based systems along seven dimensions: application, idea, development tools, architecture, ontology scope, reasoning scope, and limitations; (iii) to pin down lessons learned from the review of existing knowledge-based systems for designing better solutions and to delineate research limitations that might be addressed in future studies. The survey concludes with a discussion of future research challenges that can serve as a guide to those interested in working on ontology-based semantic knowledge representation systems for autonomous robots.

1. Introduction

Ontology-based knowledge representation is of significant importance for autonomous robots [1]. Autonomous robots are goal-oriented intelligent agents. Using knowledge representation (KR) in a robotic platform endows robots with cognitive skills that enable them to autonomously perform tasks, make decisions, and interact with a variety of environments ranging from static, structured, or fully observable to dynamic, unstructured, or partially observable [2]. Social robots are autonomous systems capable of human-robot interaction (HRI) in a socially acceptable fashion and act as assistants at workplaces and homes. Within the realm of autonomous social robotic systems [3], the demand for knowledge representation about objects and the environment using ontologies [4] to improve the semantic understanding of the task has become a primary concern. Knowledge representation in these robotic systems through ontologies defines the links between individual instances and describes their roles in the domain [5]. An ontology is a mechanism to conceptualize knowledge by defining formal and explicit specifications of shared concepts related to the domain entities. Ontology-based domain knowledge representation [6] increases the flexibility, re-usability [7], and adaptability of various robotic tasks (i.e., recognition [8], navigation [9], planning [10], and manipulation [11]) in different environments such as homes, workplaces, and public places [12]. One of the biggest challenges in the development of these knowledge-based (KB) robotic systems is to make the programming effort feasible for combining different types of tasks, robots, and environments. To handle this, knowledge-enabled approaches based on ontologies are adopted to capture the semantics of the environment. The advantage of ontology-based approaches is that knowledge pieces, which are independent of the robot, task, and environment, can be shared between different robots and applied to a variety of robotic applications. In particular, the potential benefit of ontology-based semantic knowledge representation can be obtained through web-based services such as RoboEarth [13], KnowRob [14], openEASE [15], RoboBrain [16], and RoboCSE [17], which allow different robots to share knowledge for performing their tasks.
In this article, we focus on a systematic review of the ontology-based KB systems of autonomous social robots working in the domestic, hospital, and industrial sectors. In the domestic environment, ontology-based knowledge representation shares concepts between household robots and humans to help the robot understand the objects in the environment [9]. Domestic environments are indeed dynamic, with robots and human users sharing the same space [18]. Therefore, ontologies developed [1] for domestic robots are expected to be dynamic and to evolve conceptually over time as robots perceive objects. This is one of the challenging tasks for researchers in this domain. However, it has gained much importance in recent years, and researchers have investigated systematic architectures and algorithms for automatic ontology creation in the domestic environment [19]. Service robots in hospitals require semantic knowledge representation of healthcare objects and their properties to perform tasks in an unstructured environment, such as tracking object locations in the healthcare system, e.g., for Alzheimer’s patients [20], and assisting elderly people with therapy in hospitals [21]. These tasks are performed by associating semantic knowledge of shared concepts, e.g., objects, with instances in the ontology. The development of ontology-based knowledge representation has also been identified as an emerging field for surgical robotics [22]. Recently, equipping industrial robots with sufficient knowledge has become a subject of attention in the robotics community. Researchers have proposed ontology-based knowledge representation approaches to describe the industrial environment of robots and their goals and reasoning capabilities [23]. Close interaction between human co-workers and robots is also one of the key ideas of the Industry 5.0 paradigm [24]; this idea is now starting to materialize, and researchers are working in that direction [25].
The purpose of this article is to review recent research developments related to ontology-based knowledge representation systems in the domain of robotics and to provide fresh insight to readers. In this context, ten ontology-based KB systems are surveyed. For each of them, the underlying idea, architecture, development tools, as well as ontology, reasoning, and application domains are discussed. The collection of the presented knowledge-based systems was based on the selection criteria described in Section 1.2. To the best of our knowledge, we included the recent ontology-based robotic KB systems that satisfy these criteria.

1.1. Contribution

Figure 1 shows the domain and scope of our survey. It illustrates that our review was based on only those ontology-based KR & R systems that have applications in the domain of robotics, while the approaches in the domain of smart environments are beyond the scope of our survey.
To complete complex tasks in the real world, it is often necessary for the autonomous mobile robots operating in an uncertain environment to have semantic knowledge about their surroundings, tasks, and objects. Therefore, many efforts have been initiated towards the development of semantic knowledge representation in the form of ontologies to help the robotic systems reason about the world to take more robust measurements.
Researchers have surveyed knowledge-based systems with a focus on particular tasks. For example, Reference [26] reviewed the knowledge bases of eight systems with three typical knowledge representation languages for robot task planning and the related task planners, while [27] discussed the knowledge bases of nine systems in the context of finding a substitute for missing objects. Like these previous works, we also investigated knowledge representation systems (KRSs) for robotics.
Compared to [28], which concentrated on the field of computer vision with special attention to object perception using hybrid methods, our review is mainly focused on the field of robotic autonomy, with a special interest in ontology-based knowledge representation systems oriented toward knowledge-based robotic applications.
Moreover, our review covers a broader scope in the context of social robots [3] that perform their tasks in industry, hospitals, and homes using ontology-based KRSs. To this end, we discuss the ten ontology-based KRSs shown in Table 1, which are used in industry, hospitals, homes, and public places.
To the best of our knowledge, compared to existing survey papers, the present review differs in the following respects:
  • It facilitates novice researchers and experts in overcoming the challenging task of determining and utilizing the most suitable ontology-based semantic knowledge representation system for an intended robotic application (Section 3).
  • It provides an analysis of selected KB systems from the domain of robotics, delineates their advantages, summarizes the current main research trends, discusses their limitations, and outlines possible future directions.
  • Compared to earlier surveys, this study is more concerned with the most recent work. Therefore, it provides readers with an important opportunity to advance their understanding of state-of-the-art methods.

1.2. Inclusion and Exclusion Criteria

As the subject of study in Section 3 on the analyzed knowledge representation systems, we considered the ontology-based knowledge representation of ten systems in the domain of robotics, published from 2014 to 2020, for comparison. We included only those articles that satisfied all the inclusion criteria in the scope of our review for knowledge representation (Section 3), as described in Figure 2. Articles that did not match even a single inclusion criterion were excluded.

1.3. Survey Structure

This article is organized in a top-down manner. The entire structure of our survey, with related topics and subsections, is illustrated in Figure 3, which provides a quick overview of the topics discussed in the survey. In Section 3, we investigate ten different ontology-based knowledge representation systems (KRSs) along seven dimensions and highlight the research gap by discussing their limitations. Subsequently, in Section 4, we discuss current research challenges and future research directions. Finally, we conclude our survey with a summary in Section 6.

2. Related Works

In this section, we review the literature related to ontology-based knowledge representation for cognitive robots. Cognitive robots are autonomous systems that are intelligent agents capable of performing a task with a high degree of autonomy in different application domains.
To achieve semantic integration, ontologies work as the conceptual backbones for autonomous systems and offer several benefits such as [29]: interoperability through shared understanding of the problem domain; formalization to make shared understanding machine-processable; semantic representation to provide quality services in automatic robot systems.
A domain ontology for autonomous systems (OASys) was developed by [30] for analysis, implementation, and run-time processes. It was designed to capture the knowledge from the system, environment, and task models to exploit cognitive control loops. The ontology-based conceptual model of OASys provided support for both the description and generic development of the engineering process. It used four ontologies: the system ontology, which includes the elements to define the details of the autonomous system; the ASys system ontology, which defines its structure, behavior, and functions; the system engineering ontology, which lists the engineering processes related to the system and its software; and the ASys system engineering ontology, which addresses additional ontological elements related to the engineering process of the autonomous system.
In another study [31], an ontology-driven framework was designed. It was composed of OASys (as the domain ontology) and an ontology-driven engineering methodology (ODEM) for developing self-x mechanisms into autonomous robots. The OASys contained two layers [32] for representing higher and lower level ontological elements, which were used to address general and autonomous systems, respectively. The ODEM was more focused on generic engineering processes and development support for ontology-based autonomous systems [30].
Robotic systems can perform human-like tasks more flexibly if they can share knowledge among themselves and use crowdsourcing to tap human skills. One approach towards this direction is using an ontology as a cloud application. In a study, Reference [33] presented the openEASE cloud engine (https://data.openease.org) with ontologies in the kitchen domain and execution logs from three robots operating in two kitchens.
Reference [34] made a seminal contribution towards the extension of existing ontological standards through the development of behavior-, action-, and capability-related ontological concepts, which are crucial to enhance the cloud robotics domain.
There are many projects that use ontologies to represent the vocabulary and knowledge acquired by robots in certain scenarios, such as kitting and rehabilitation [35,36].
Knowledge modeling (KMo) using ontologies focuses on making systems intelligent [37]. Ontology development depends on information artifacts that can be distinguished according to their purpose. Reference [37] used model-based systems engineering (MBSE) as the central information artifact, which offers benefits from the conceptual phase to the system development phase in the case of an autonomous driving function. It demonstrated the possible improvements enabled by knowledge modeling and ontologies in autonomous vehicle systems engineering. Ontologies address issues related to inconsistency and a lack of domain knowledge.
Recent developments in robotics have highlighted the need for conceptual knowledge about objects in a robot’s environment to efficiently perform human-scale tasks, e.g., holding an object or closing a drawer [38]. Embodied robotic agents require additional knowledge related to body movement to perform these tasks in the real world. However, many AI action models do not provide this knowledge and only concentrate on pre-/post-conditions and sequences of actions. Reference [39] investigated this issue to bridge the gap by proposing an AI action model based on human psychology. It divided the ontology-based knowledge representation into four levels and defined 18 actions that could be performed on objects if the pre-conditions were met.
It is essential for a cognitive robotic system to have knowledge for representing the relations among objects with respect to human actions. Ontologies have significant importance in cognitive robotic applications that involve object identification [40,41] in the real world, e.g., the domestic environment [42]. In this direction, an ontology-driven knowledge retrieval framework was proposed [38] for providing knowledge to a cognitive robotic system in the domestic environment. It offered a domain-dependent framework in which activities were translated into class instances through an instance generator algorithm and queries about objects were tackled using common sense reasoning. Another study [43] presented a knowledge retrieval framework in the household environment to endow robots with cognitive capabilities for performing human-scale tasks. It used a virtual home dataset for information extraction. It obtained knowledge about the relation between actions and objects by translating human activities into class instances of the ontology. It handled missing knowledge using semantic match-making, establishing a relation between a KB entity and an entity that does not exist in the KB. It answered queries related to household objects through this semantic match-making.
Correctly recognizing human activities [44,45] through semantic representation is a major step towards HRI. In one study [46], hand motion information and two object properties were combined to identify human activities. This method used semantic representations for skill transfer to a humanoid robot. It was composed of three main modules related to human motion, human behavior, and activity demonstration by the robot. Semantic rules were generated from human observation. New relationships between objects and activities were computed by adding new capabilities to the reasoning engine, which improved the dynamic growth of the ontology-based knowledge representation.
A comprehensive study in [47] considered key decision issues and HRI challenges at the physical and cognitive levels, such as human-robot interactive communication, human-robot joint actions to complete a task, and human-aware task execution planning. It addressed these issues by combining situation assessment based on perspective taking, affordances, inferences, and knowledge model representation for both humans and robots. It acquired knowledge from three sources at run-time: prior knowledge from the OpenRobot ontology, which includes common sense and scenario-specific knowledge; knowledge from perception, interaction, and action planning; and symbolic statements from inferences. It showed that implementing human-level semantics in robotic systems can equip them with stronger cognitive skills and lead to richer human-robot interactions.
Service robots capable of executing human-like tasks [48] need to interact with humans in a natural and efficient manner. This capability can be achieved using declarative knowledge representation, which is a vital concept in cognitive science. Reference [49] applied decision making using declarative knowledge representation and exploited teacher-learner interaction in which the teacher gave instructions about the actions to be performed, rather than hand-coded explanations of the task. Interaction with the teacher in simple natural human language enabled the robot to perform the task when its execution plan halted.
Recent developments in robotics have led to a renewed interest in assistive robots for cognitive education that engage students in their learning activities. These robots perform intelligent HRI to understand the dynamic environment and carry out various reasoning tasks, including visual question answering (VQA) [50]. In the area of pre-school education, Reference [51] presented a cloud-based VQA educational robot for posing simple questions about scene images. It was aimed at providing meta-cognition tutoring and geometrical thinking training [52]. Its architecture was composed of robotic applications and cloud services. The robot taught the concepts of objects in a natural scene through resources recommended by a knowledge map during interactive teaching. Three modules were designed for the cloud service: an interactive instruction control module for specifying a set of interaction strategies; a knowledge map module for providing the description and learning material of detected objects in the scene; and a questioning and answering module for generating questions and answers about the objects, which were recognized using the deep learning-based Faster R-CNN [53].
Recently [54,55,56,57], VQA has been tackled by incorporating external knowledge represented as subject-relation-object or visual concept-relation-attribute triplets. Reference [54] proposed a knowledge-incorporated dynamic memory network framework for retrieving the information from external knowledge bases to answer open-domain visual questions.
Reference [58] presented the solution of the unexplored VQA problem related to named entities in captured images that require real-world knowledge. It also provided the largest data set to explore VQA over large knowledge graphs.
A comprehensive study [59] divided VQA methods into four categories: joint embedding approaches, which combine deep neural networks from natural language processing and computer vision to learn image and sentence embeddings [60,61]; attention mechanisms [62,63,64,65], which concentrate on important parts of the image and/or question; compositional models; and knowledge-enhanced approaches, which obtain external data from knowledge-based systems consisting of commonsense or encyclopedic information. Joint embedding approaches feed the image and sentence embeddings to a classifier to predict an answer, while knowledge-enhanced approaches enable the system to retrieve information that is not available in visual datasets [66,67].

3. Analysis of Knowledge Representation Systems

The KRSs discussed in this section use ontologies that make a robot more autonomous by providing it with KR & R capabilities and semantic interoperability for performing tasks with semantic understanding in human-centric environments. To this end, we present ten KRSs, shown in Table 1, that use ontologies for semantic understanding of the real-world environment to perform a variety of robotic tasks in industry, hospitals, homes, and public places.
Table 1. List of selected knowledge representation systems (KRSs).
| # | KR Name | Publication | Year | Ref. |
|---|---------|-------------|------|------|
| 1 | KnowRob | KnowRob 2.0: a 2nd-generation knowledge processing framework for cognition-enabled robotic agents | 2018 | [68] |
| 2 | OROSU | Knowledge representation applied to robotic orthopedic surgery | 2015 | [69] |
| 3 | CARESSES | The CARESSES EU-Japan project: making assistive robots culturally competent | 2017 | [70] |
| 4 | PMK | PMK: a knowledge processing framework for autonomous robotics perception and manipulation | 2019 | [71] |
| 5 | SARbot | High-level smart decision making of a robot based on an ontology in a search and rescue scenario | 2019 | [72] |
| 6 | IEQ | A humanoid social robot-based approach for indoor environment quality monitoring and well-being improvement | 2020 | [73] |
| 7 | SmartRules | An integrated semantic framework for designing context-aware Internet of Robotic Things systems | 2018 | [74] |
| 8 | ARBI | Ontology-based knowledge model for human-robot interactive services | 2020 | [75] |
| 9 | Worker-cobot | An ontology-based approach to enable knowledge representation and reasoning in worker-cobot agile manufacturing | 2017 | [76] |
| 10 | APRS | Implementation of an ontology-based approach to enable agility in kit building applications | 2018 | [77] |
The evaluation criteria are shown in Table 2, which is divided into three columns: the second column lists the research questions, while the related sections are listed in the third column so that the reader can quickly find the desired topic at a single glance.
Our article classifies the knowledge representation of the ten systems (shown in Table 1) according to seven research questions (given in Table 2), which are important for providing an evidence-based discussion of ongoing research on ontology development in the robotic domain with respect to real-world applications, architecture, design or implementation of ontology development, and reasoning capabilities. Based on these seven research questions, we discuss knowledge representation systems that use ontologies for task semantics to support robot autonomy in real-world applications.
We selected these research questions because they represent important factors that assist in the ontology selection and development process. Whether one is interested in reusing an existing ontology or developing a new ontology from scratch, the first stage is to determine the domain; the second is to identify the purpose of developing the ontology in that particular domain; the third is to decide the scope of the ontology by specifying the extent of the domain it covers; and the final stage is to select the tools for developing its architecture. Questions that should be addressed at these stages include: What will be the domain of the ontology? What will be the basic idea or purpose of using the ontology? What will be the scope or subject matter it intends to cover? Which components will be included in the design of its architecture? What tools will be used to maintain it? To answer these questions, we categorized them into seven sections (shown in Table 2), and in each section, we answer one research question by analyzing the ontology-based knowledge representation of the ten systems, together with their limitations.
In the following subsections, we analyze the robotic KRSs, shown in Table 1, in seven dimensions according to the research questions presented in Table 2.

3.1. Application Domain Scope

The first criterion shown in Table 2 for analyzing the KRSs is fulfilled by categorizing them according to their application domain. In this subsection, we thoroughly examine the domain of each selected robotic KRS and group them by application. The most demanding applications of autonomous robots have gained interest in the industrial, medical, and domestic domains, in which knowledge representation provides meaning to the input data that a robot uses to perform delegated tasks.
The criteria based on the domain and application scope are listed in Table 3.
The KnowRob [68] KRS was designed to perform household manipulation tasks, particularly in an assistive kitchen project. Its framework is reusable on a wide range of robotic platforms. Additionally, it is a software product developed within the RoboEarth [13] project for creating a World Wide Web for robotics, with the aim of downloading instruction sets to perform a task and uploading learned models to share them with other robots. It enables robots to perform goal-oriented manipulation tasks.
Starting from the initial guidelines in [78], OROSU [69] maps robotic ontologies into the medical domain, with application to a surgical procedure for hip resurfacing.
CARESSES [70] targets the domain of assistive robots for elderly support. It builds an interaction between a culturally competent assistive robot and an elderly person for performing everyday tasks.
PMK [71] was developed to enhance robotic capabilities for combined task and motion planning (TAMP) in a wide range of manipulation scenarios, such as serving a cup in an indoor environment.
SARbot [72] introduces a robotic application in the disaster search and rescue (SAR) domain. It represents knowledge using an ontology that enables the robot to better understand the environment for search and rescue operations and to make smart decisions using an ontology-based task planning algorithm.
IEQ [73] employs a knowledge representation approach that gives social humanoid robots the cognitive capabilities to interact with the people in a building and provide them with suggestions based on real-time monitoring of indoor environment quality (IEQ).
The KRS of SmartRules [74] for Internet of Robotic Things (IoRT) systems was developed during the SemBySem and Web of Objects projects. It has shown effectiveness in ambient assisted living (AAL) applications, using the semantic IoRT system to help elderly people who suffer from perceptual and mobility impairments by monitoring their daily living activities and providing assistance. Its use cases involve medicine reminders, fall detection, emergency confirmation, activity recognition for food preparation, and providing dietary advice.
The application of ARBI [75] involves a social robot Turtlebot2 that uses a scalable knowledge model and acts as a medical receptionist in a hospital. The social robot performs greetings and guides the users to the required department after understanding their symptoms through knowledge inference.
The worker-cobot [76] targets industry to establish collaboration between human workers and industrial robots (IRs) in a shared environment. It represents the knowledge of all components in a shared manufacturing work cell, in a form that is understandable for both human workers and IRs. Its application provides manufacturing knowledge representation, sharing, and reasoning in the domain of the cooperative work cell, which includes the cobot, co-workers, and production components.
The KRS of APRS introduces robot agility in the kitting domain [77]. Its integrated agility allows manufacturers to empower their robotic systems with more automatic part customization.

3.2. Idea and Contribution

Focusing on semantic frameworks for robots that go beyond local knowledge bases toward a knowledge-enabled cloud-based system, KnowRob 2.0 [68], an extension of [14], is considered the most advanced knowledge representation and reasoning system relying on ontologies and semantic web technologies. Its core purpose is to integrate physics simulation-based reasoning and the photo-realistic rendering techniques of a game engine into a hybrid knowledge processing architecture, enabling autonomous robotic agents to master complex manipulation tasks with additional cognitive capabilities in a human-centric environment.
The main purpose of the OROSU [69] framework is to develop a KBS for human surgeries with robotic assistance by integrating healthcare ontologies and robotic systems. It also aims at tracking the robotic actions and adapting the robot pose in drilling tasks during surgical procedures.
The idea of CARESSES [70] is to define a system that endows the autonomous robot with communication skills through speech and gesture recognition including perception abilities for semantic recognition, as well as the cultural competence to determine the robot’s behavior. It also aims at enabling the robot to adapt to the culture of the individual for perceiving, acting, reasoning, planning, and decision making based on the culturally aware capabilities.
PMK [71] contributes to both knowledge formulation and reasoning. The purpose of the PMK knowledge base reasoning framework is to formalize and standardize an ontological representation that contains low-level semantic data for robot perception and high-level information about the environment for manipulation tasks. The current version of PMK [71] aims at including sensing information, semantic reasoning to handle the differences, standardized ontological concepts, and geometric and spatial reasoning knowledge for combined task and motion planning, which were not covered in its preliminary version [79].
The main purpose of developing SARbot [72] is to perform complex robotic tasks in disaster search and rescue scenarios that involve exploring an unknown environment, locating the victims, and sending back their information immediately [80,81]. It emphasizes high-level control, supports the decision making of the robot using the ontology, and efficiently updates the task knowledge.
IEQ is vital to ensure the well-being and comfort [82,83] of the people working or living inside a building. To address these challenges, the KRS of IEQ monitoring [73] targets three types of comfort: visual, thermal, and acoustic, following six norms for monitoring IEQ using a social robot. It mainly contributes to the development of the IEQ ontology, the definition of normative standards, and compliance reasoning algorithms. Finally, it implements these on an interactive social robotic platform that gives appropriate suggestions after an evaluation that takes into account the preferences of the individuals and integrates actual IEQ data with post occupancy evaluation (POE) [84] survey information obtained from the person.
The concept of the semantic IoRT introduced in SmartRules [74] presents a semantic framework for context-aware IoRT systems concentrating on monitoring and assisting an elderly person during his/her daily living activities. It generates the actions based on contextual information. The purpose of the SmartRules framework is to integrate the heterogeneous data and provide context awareness to determine the robot’s reaction to context changes using symbolic representation and reasoning with first-order logic. It aims to develop a knowledge representation framework known as SmartRules, an operational platform for context awareness focusing on manageable objects (MOs) with minimal code.
The ARBI [75] framework aims to propose an integrated knowledge model for human-robot interactive services that is based on the ontology. It clearly defines common concepts, as well as domain knowledge for better understanding of agents and environmental information. It focuses on symbolic representation, which makes the knowledge model independent of the robotic system.
The worker-cobot [76] framework overcomes the challenges of continuously changing production components, environmental complexity, and shared knowledge representation for cooperative manufacturing. It provides a distributed control solution, which includes an ontology-based multi-agent system (MAS) and a business rule management system (BRMS) to achieve agile manufacturing in a worker–cobot cooperative work cell and to manufacture a customized product.
The purpose of the APRS [77] kitting project is to empower the manufacturing robot with agility in the kit-building process using ontology-based information representation models. It contributes to making the industrial robots more agile for handling the challenges faced by small and medium manufacturers.
Table 4 summarizes the key ideas adopted by each robotic KRS.

3.3. Development Tools

KnowRob [68] has been implemented in a modular way, which allows users to easily reuse it by adding, removing, or exchanging the required parts of its functionality. Each of its modules can be extended in two ways: first, through an additional knowledge-based extension of the ontology, and second, through an additional reasoning-based extension of the language. KnowRob uses a combination of two languages: the Web Ontology Language (OWL) [85] and SWI-Prolog [86]. OWL is used to encode the ontologies and to describe relational knowledge, such as concepts that are semantically interconnected; it is serialized as XML-like files. SWI-Prolog is used to load, store, and reason about the explicit knowledge in memory that contains the facts represented in OWL, and it manages the querying of RDF triples.
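As a rough illustration of the kind of triple-shaped relational knowledge that OWL encodes and that such systems query, the following Python sketch uses rdflib; the namespace, classes, and the storedIn property are hypothetical and are not taken from the KnowRob ontology.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Hypothetical namespace and terms; not the actual KnowRob ontology.
KB = Namespace("http://example.org/kitchen#")

g = Graph()
g.bind("kb", KB)

# Relational knowledge of the kind an OWL ontology serializes as RDF triples.
g.add((KB.Refrigerator, RDFS.subClassOf, KB.StorageArtifact))
g.add((KB.fridge_1, RDF.type, KB.Refrigerator))
g.add((KB.milk_1, KB.storedIn, KB.fridge_1))

# Query the triple store, roughly analogous to asking "where is the milk?".
rows = g.query("""
    PREFIX kb: <http://example.org/kitchen#>
    SELECT ?container WHERE { kb:milk_1 kb:storedIn ?container . }
""")
for row in rows:
    print(row.container)  # -> http://example.org/kitchen#fridge_1
```

In KnowRob itself such lookups are posed as Prolog queries over the loaded OWL facts rather than SPARQL, but the underlying triple-based knowledge is the same.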
The initial guidelines for OROSU’s [69] implementation were presented in [78]. It was developed in Protégé and OWL. Its task definition process is implemented using Protégé, and it contains tools included in the KnowRob framework.
CARESSES [70] used the OWL language for describing the ontology, coupled with a Bayesian network to describe cultural knowledge, in a probabilistic sense for avoiding a high risk of stereotyping when modeling the cultures in its KB.
PMK [71] was implemented using the Protégé ontology editor and OWL. Queries are created in SWI-Prolog, and its semantic web library loads ontologies represented in OWL using Prolog predicates. These predicates provide the knowledge required by the robot to perform object manipulation tasks. Its perception module fixes and attaches 2D cameras with two Robot Operating System (ROS) [87] nodes and uses the C++ library ar_track_alvar for object pose detection.
SARbot [72] uses OWL for ontology development and adopts the Semantic Web Rule Language (SWRL) to describe complex rules in a semantic way and to overcome the limitations of OWL. It uses JESS, a Java-based CLIPS-style rule reasoner, while task preconditions and atomic actions with data properties are defined using Protégé. The experiments were conducted using a TurtleBot3 as the real robot platform, established in ROS.
IEQ monitoring [73] implements the normative reasoner [88], which is a Java-based interpreter of the AgentSpeak language [89] for BDI (belief-desire-intention) agents (i.e., robots).
The SmartRules framework in [74] allows IoRT systems to use and react according to contextual knowledge and abstracts the heterogeneity of the MOs. It was developed using two sub-languages. μ-Concept is the language used in the ontology-based knowledge representation system [90] to model the context, in particular the objects and the actions between them in the IoRT environment. The μ-Concept language contains three main constructs called concepts, properties, and individuals. It specifies the IoRT environmental ontology with entities such as physical objects and actions, and its syntax is based on the RDFS XML schema. The SmartRules sub-language specifies the context management rules that allow the reasoner to perform the inference process, known as matching; it is composed of if-then production rules in which μ-Concept constructs are used as predicates stored in variables. Together, these two sub-languages provide efficient reasoning on MOs to generate high-level contextual information or actions. In contrast to OWL, which reasons under the open world assumption (OWA), both SmartRules and μ-Concept use the closed world assumption (CWA) for reasoning.
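For intuition, the following minimal Python sketch mimics closed-world, if-then context rules of the general kind described above; the fact, rule, and action names are invented for illustration and do not reflect the actual SmartRules or μ-Concept syntax.

```python
# Minimal sketch of closed-world if-then context rules (illustrative only;
# fact and action names are hypothetical, not SmartRules/mu-Concept syntax).
facts = {
    ("locatedIn", "person_1", "kitchen"),
    ("state", "stove_1", "on"),
}

def holds(*triple):
    # Closed-world assumption: anything not asserted in the fact base is false.
    return triple in facts

actions = []

# Rule: if the person is in the kitchen and the stove is on, show a reminder.
if holds("locatedIn", "person_1", "kitchen") and holds("state", "stove_1", "on"):
    actions.append(("VisualMessage", "person_1", "The stove is still on."))

# Rule: if a fall is detected, raise an emergency alarm (does not fire here under CWA).
if holds("state", "person_1", "fallen"):
    actions.append(("EmergencyAlarm", "person_1"))

print(actions)
```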
ARBI [75] uses SPARQL statements and Apache Jena (https://jena.apache.org) to implement the knowledge query and inference functions, along with ROS-based actuator functioning. Its context reasoner is based on Prolog, while the preconditions for a certain action are described as OWL-based ontological restrictions on properties.
In worker-cobot [76], the Holonic Control Architecture (HCA) was developed and implemented via the Java Agent DEvelopment Framework (JADE) [91], while the ontology-based Agent Communication Language (ACL) [92] was used for message exchange between JADE agents.
The models in the APRS kitting [77] framework were defined in the XML Schema Definition Language (XSDL) and the Web Ontology Language (OWL).
The development tools used in each KR system are listed in Table 5.

3.4. Architecture

In this subsection, we give a review of the architecture for each selected KRS, with a discussion of their main components, summarized in Table 6.
The KnowRob 2.0 [68] architecture consists of an interface shell, a logic-based language, and a hybrid reasoning shell, which are centered on the symbolic representation of the ontology. It operates within the perception-action loop of the control system in the embodiment of the robotic agent. The interface shell contains question answering, perception, learning, and recorded episodic memory; it provides a uniform query answering system to execute the assigned task by referring to captured image data, observing the state of the object, and inferring the appropriate motion parameters. Queries related to manipulation tasks require parameterization of the motion and data structure along with the function calls. The logic-based language facilitates the hybrid reasoning shell by providing descriptions for entities such as objects, their parts, and environments. It also uses control-level data structures for grounding the symbolic expressions and provides ontology-based heterogeneous representations. The hybrid reasoning shell implements the knowledge using multiple methods, including key components such as data structures, robotics algorithms for inverse kinematics, and collision-free motion planning for finding accurate paths to the precise position of the object.
The OROSU [69] architecture is based on the combination of multiple ontologies, while its base ontology partially relies on the IEEE 1872-2015 Standard Ontologies for Robotics and Automation [93]. It is among the first frameworks that merge robotic and medical ontologies for surgical robotics. The CARESSES [70] architecture consists of three main modules: the cultural knowledge base (CKB), culturally sensitive planning and execution, and culture-aware human-robot interaction, which are integrated into universAAL [94]. The CKB is the core of its framework architecture.
The PMK [71] architecture consists of four major components: the perception module, the PMK framework, the TAMP module, and the execution module. The perception module detects the world entities using tags, and their identified poses and IDs are then used for building IOC knowledge. The PMK framework offers reasoning predicates for perception and object features along with situation analysis. The TAMP module plans the task using the fast-forward task planner [95] and combines it with physics-based motion planning [96]. Finally, the execution module performs the assigned task.
The SARbot [72] architecture divides the robot control into three levels: the motors and sensors are controlled by the low-level control, environment perception, SLAM, and navigation are handled by the middle-level control, and smart decisions of the robot are made by the high-level control. The IEQ [73] architecture for providing suggestions through human-robot interaction consists of seven major elements, including the knowledge base (KB), which is the central storage for all of the robot’s knowledge and contains the current instance of the domain ontology related to the current monitoring session; the dialog and speech recognition module; the suggestion module; and the Light_Sound data acquisition module. The Therm_Hygrometric data acquisition module uses web services for acquiring the mean and standard deviation of humidity and temperature values, while illuminance and sound pressure are obtained from the user.
The SmartRules [74] architecture for the semantic IoRT system contains two software layers. The lower abstraction layer is a semantic façade that provides an interface for handling the heterogeneity of low-level components and abstracts access to the MOs by transforming heterogeneous sensor data. The top reasoning layer, the reactive reasoning core (RRC), processes the abstraction layer information, performs reasoning, and generates actions, which are converted into commands for the lower layer, completing the perceive-plan-act cycle. Its model is written into the database, while its functionality is divided among three sub-modules: the ontology (explained in Section 3.5), the consistency checking module, and the reasoning module (discussed in Section 3.6). The consistency checking module monitors the stability of the model with respect to the constraints given in the ontology.
The ARBI [75] architecture uses an ontology-based knowledge model and is composed of a task planner, which is based on the JAM architecture [96], a context reasoner, and a knowledge manager. The knowledge manager processes the agents’ queries through inference and matches name spaces for storing recognition and dialog data in the knowledge model. ARBI also employs a dialog management system and a perception and action engine; the dialog manager queries the knowledge manager about the topic of the user’s dialog and generates the final statement of the dialog by applying a situational knowledge response.
The worker-cobot [76] architecture achieves agile manufacturing in three main steps. In the first step, HCA is used to ensure the autonomy of the cooperative manufacturing system; a common language for representing the shared environment of workers and the cobot is also specified at this level. In the second step, knowledge is exchanged in a form understandable to both the worker and the robot. The third step is to reason about the production demands and the status of the cooperative work cell in order to achieve agile manufacturing.
The architecture of APRS kitting [77] for the domain of kit building consists of three models. The kitting workstation model represents the kitting workstation with its objects (e.g., parts) and data (e.g., poses). The action model represents the structural components for the automatic generation of the Planning Domain Definition Language (PDDL) domain [97]. The robot capability model enables the robot to perform specific actions on one or more objects; it also contains pointers to the elements of the new and kitting workstation models, as well as a pointer to the robot element and to the assembly actions performed by the robot.
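To give a concrete feel for what automatically generating PDDL from such models might involve, the following hypothetical Python sketch emits a tiny PDDL problem from a kit description; the kit contents, predicate names, and domain name are invented for illustration and are not the actual APRS models.

```python
# Hypothetical sketch: generating a tiny PDDL problem from a kit description.
# The kit contents, predicates, and domain name are invented for illustration.
kit_contents = {"kit_1": ["small_gear", "large_gear", "top_plate"]}

parts = [p for plist in kit_contents.values() for p in plist]
goal_atoms = "\n            ".join(f"(in-kit {p} kit_1)" for p in parts)

pddl_problem = f"""(define (problem build-kit-1)
    (:domain kitting)
    (:objects kit_1 - kit {' '.join(parts)} - part)
    (:init (empty kit_1))
    (:goal (and
            {goal_atoms})))
"""
print(pddl_problem)
```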

3.5. Ontology Scope

This subsection discusses the scope of each KRS based on the ontology in the domain of robotics. An ontology is best defined as a conceptualization-based formal representation of knowledge that includes the explicit specification of concepts and objects, other entities presented in the environment and relationships among them, as well as tasks and actions. We divide this section into two parts. The first part gives an overview of the ontologies used for each selected KR system and summarizes them in Table 7. The second part deals with classifying KR systems according to their major ontological components that include the object (Section 3.5.1), the map of the environment (Section 3.5.2) and the task and action (Section 3.5.3), which are summarized in Table 8.
Starting with the overview of the ontologies, KnowRob [68] uses multiple ontologies to allow communication and information sharing among agents. Its main ontology describes robotic agents, their connected body parts, tasks, actions, behavior, objects with their parts, and the environment. It provides relevant background knowledge in the form of rules with additional axioms. Its ontological knowledge representation approach consists of four main knowledge bases: the inner world knowledge base contains CAD and mesh models of real-world objects with physics simulation; the virtual knowledge base is computed on demand from the data structures; the logic knowledge base abstracts the data from sensors, control commands, and events such as grasping; and the recordings of these events and the experiences of the robotic agents are stored in the episodic memory knowledge base.
The OROSU [69] ontology uses knowledge from the CORA ontology [93], a human anatomical ontology and SNOMED-CT [98] for defining general medical concepts, and the KnowRob framework [14,99]. It adopts the robot parts (PARTS) ontology along with its interaction with the Suggested Upper Merged Ontology (SUMO). The extensive vocabulary and flexibility of SUMO make it a top-level ontology [100].
CARESSES [70] provides human activity recognition and represents cultural information using ontologies. Its knowledge is contained in a modular ontology structure that combines bottom-up and top-down approaches. Its ontology for culturally competent robots relies on three layers with four major elements: the TBox, the CS-ABox, the PS-ABox, and the A&A algorithm. The culture-generic knowledge layer contains the terminology (TBox, I) for representing information related to the robot, goals, actions, and the grounding of values for all of the cultures of the world considered in the KB. The culture-specific settings layer holds the assertions (CS-ABox, II) for representing information that depicts the cultural background of the individual, which the robot can use when certain information is not available. The person-specific settings layer keeps the assertions (PS-ABox, III) for representing the cultural identity, environment, and preferences of the assisted person, which specify the appropriate behavior of the robot. The assessment and adaptation (A&A) algorithm identifies person-specific settings through dialog or observation. The terminology box (TBox) of the ontology is composed of classes and the general properties of the domain concepts, such as data and object properties.
A preliminary version [79] of PMK [71] was inspired by [4] for the indoor robot navigation task. PMK defines concepts through a hierarchy of ontologies with three layers: the meta ontology, which describes concepts related to a physical object; the ontology schema, which represents domain-specific knowledge; and the ontology instance layer, which stores information about specific objects, such as object features. It follows this hierarchical schema to perform manipulation tasks using seven classes across the three layers.
SARbot [72] divides the ontology into two layers: the conceptual layer for knowledge representation of the disaster rescue domain and the instance layer for the relationships among instances. The conceptual layer is further divided into abstract and specific layers. The abstract layer contains concepts related to the SLAM, object, and task classes, while the specific layer describes certain environment objects. The three main parts of its ontology are the entity ontology, environment ontology, and task ontology, whose modules are combined into a joint ontology. For efficient task execution, it allows the robot to query and match the ontology at each layer according to the initial state.
IEQ [73] defines the IEQ ontology, which focuses on concepts related to the physical environmental parameters of indoor areas, the users’ features, and their perceptions of and preferences about the environment. The IEQ ontology consists of three parts, which concentrate on three main concepts: occupant, environment, and recommendation. Occupant elements contain the attributes and relationships of the user (i.e., person), features (e.g., age, gender), user perceptions (e.g., heat), and preferences (e.g., temperature, illuminance level) about the environment, allowing the robot to understand the comfortable conditions for the occupants.
SmartRules [74] contains a domain ontology to create context models for IoRT scenarios. It introduces the micro ontology for modeling the concepts of semantic IoRT applications in indoor environments, spatial relations, sensors, and observations. Its top level includes axiomatically simpler concepts from the DOLCE Ultra Lite (DUL) ontology and the semantic sensor network (SSN) ontology.
ARBI [75] proposes a generic knowledge model known as the intelligent service robot ontology (ISRO). Its high-level scheme allows dynamic generation and basic knowledge management, which enables it to be implemented on any service robot independent of a specific domain. The low-level information and symbolic knowledge provide support for concretization and abstraction through semantic relationships. It achieves flexibility and scalability by defining upper-level concepts. Environment, perception, user, action, and robot are the main ontologies. Since the ARBI framework has been verified through its real-world implementation in a medical reception situation, its environment ontology also has medical domain knowledge.
Worker-cobot [76] provides an ontology-based multi-agent system (MAS). It uses ontologies for the communication of autonomous agents to customize pumps in a collaborative work cell consisting of two workers and one cobot. The cobot’s task is to pick the right amount of production parts from storage and place them at the workers’ workstation based on their manufacturing status, while the workers’ task is to assemble the product parts. Its main ontology is the agent ontology.
APRS [77] has implemented three ontologies in the kitting domain known as the kitting workstation model, the action model, and the robot capability model.

3.5.1. Object

Most of the compared KR systems define the Object, from various perspectives, as an essential component of their ontologies. KnowRob [99] defines the “Object” in a large ontology ranging from very generic classes such as SpatialThing to specific ones, e.g., Refrigerator-Freezer; it contains about 7000 object classes [101]. In addition to this, KnowRob 2.0 [68] considers physical, social, and mental entities as objects participating in an event. OROSU [69] takes the basic Object definition from the IEEE Suggested Upper Merged Ontology (SUMO) (https://en.wikipedia.org/wiki/Suggested_Upper_Merged_Ontology). We could not find an object definition in CARESSES [70]; however, the object is defined there as a sub-class of Topic. PMK [71] defines objects in the WSObjectClass and their features in the Features class. Its main WSObjectClass uses the standardized concepts of SUMO: Artifact at the object level and Collections at the group level, while the new concept ArtifactComponent is used at the component level. SARbot [72] defines the concept of environment objects (e.g., victims, bookshelves, etc.) represented as individual classes. It also covers objects that may initially be unknown to the robot but can be recognized to update the environment ontology. IEQ [73] does not describe the concept of objects in natural language; however, it shows the ontological concepts and relations of logical sentences about objects. SmartRules [74] concentrates on the notion of manageable objects (MOs), which comprises three subcategories of object classes: active objects, which include physical objects; virtual objects, i.e., databases, files, and web services; and static and movable objects. ARBI [75] defines different objects, ranging from object connections and components (e.g., hinges, joints) to human-scale objects (e.g., documents, tables), in the perception ontology, along with their attributes such as size, color, state (static or dynamic, open or closed), shape, and other features. Worker-cobot [76] does not provide an object definition; however, it uses Drools [102] for pattern matching with the ReteOO algorithm [103] by combining it with object-oriented (OO) concepts such as abstraction, inheritance, and encapsulation. Its agent descriptive ontology indicates different objects and entities of the shared domain (i.e., electric motor, worker, robot). APRS [77] describes solid objects (e.g., parts) with general information related to their basic elements. Its PointType class stores dynamic information, while the PartType class holds both static and dynamic information about the parts of an object. A finished kit becomes an instance of the PartsTrayType class.

3.5.2. Map of Environment

KnowRob [68] does not give a natural language definition of a map of the environment; however, it contains the concept of SemanticEnvironmentMap as a sub-class of Map. OROSU [69] defines the places in which the robot performs a task (e.g., OperatingRoom, CTRoom, EngineeringRoom) as sub-classes of Room. PMK [71] contains the Region, Physical Environment (environment topology), and SemanticEnvironment subclasses of Workspace. SARbot [72] uses the environment ontology to describe the environment map. It contains two classes, the object class and the SLAM class, which build a semantic map for locating a victim in a room (Room1, Room2). It also presents geometric categories of the grid map indicating whether a detected object is in the obstacle area, a detected object is in the unknown area, a detection point is in the free area near obstacles, or a detection point is in the free area with no obstacles nearby. In IEQ [73], the concept of the indoor environment is used to detect non-compliant situations; however, it does not contain a natural language definition of the environment. SmartRules [74] defines specific concepts related to the internal and external environments, for instance, Room Region, Living Room Region, and Garden Region as subconcepts of Place in the DOLCE Ultra Lite (DUL) ontology (http://ontologydesignpatterns.org/wiki/Ontology:DOLCE+DnS_Ultralite). ARBI [75] semantically defines a knowledge representation that is constructed as general knowledge related to OpenCyc. Its environment ontology is composed of concepts for places, temporal things, map information, and domain-specific knowledge. Places are defined as spatial things (a hallway, building, street). The environment description includes the spatial relations between places, the scale, and the central point of the place in the robot’s map.

3.5.3. Task and Action

KnowRob [99] defines Action as an Event using the definitions of the OpenCyc ontology. The KnowRob [68] knowledge base system provides the basis for an autonomous mobile robot by representing general class knowledge in the form of an ontology for task execution. It uses a well-structured foundational ontology, called DUL, to execute a task by following the “Plan” definition of the DUL ontology, and it defines many activity-related concepts in the Open-EASE ontology. OROSU [69] contains the sub-actions that compose a task; for instance, the sub-actions USimagePerception and RoboticBoneTracking are performed in OperatingRoom. It also defines actions related to medical perception and algorithms, as well as pre- and intra-operative actions, in the MedicalSensing and manipulationAction classes. A natural language definition of Action, however, is given in only a few of the analyzed approaches (e.g., CARESSES [70]). PMK [71] defines the Action class in which symbolic tasks are described at three levels: the Task level, Level 2 (Sub-Task), which includes short-term sequences of move, pickup, and place, and Level 1, which contains AtomicFunction for the perception and planning processes. SARbot [72] uses the TaskOntology for search and rescue operations, in which a task contains subclasses of subtasks and atomic actions; for instance, SearchTask, RescueTask, and RecognizeTask are three tasks of the TaskOntology, and RescueTask has three atomic actions: PickUpVictim, GetOutVictim, and PutDownVictim. In IEQ [73], the recommendation ontology models the concepts and relationships for IEQ evaluation based on comfort standards and helps the robot to complete its task by giving suitable recommendations to the user. SmartRules [74] abstracts the information from lower-level visual data into high-level entities for performing context-aware actions. It proposes _VisualMessage, _MoveRobot, and _EmergencyAlarm to model the actions in the case of a context change. The Action ontology of ARBI [75] expresses the robot’s actions, e.g., moving, grabbing, and speaking in service situations, and its inter-relations with other models generate complex knowledge that assists the robot in action selection and event management. The action ontology also contains an event class that stores both recordable and cognitive events, as well as externally recognized data, in the form of event instances.
Worker-cobot [76] uses the agent ontology, which is divided into three categories, to perform a task. The agent structural ontology expresses the relations (i.e., predicates) between the entities of the HCA and facilitates better understanding, while the agent administrative ontology represents the operations or actions performed by the HCA entities (i.e., pump assembly, pick and place). We found only the names of the ontologies related to tasks and actions; the description of their classes is not given in [76]. APRS [77] presents an action ontology with all the concepts required to support an action. It contains the information needed for the automatic generation of the PDDL domain and problem files required by the planner. The capability ontology represents information related to the robot’s capabilities, such as performing assembly actions (i.e., pick-up-part, small-gear).
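The APRS capability ontology is described as relating assembly actions to what a robot can do. As a rough, hypothetical sketch (names such as robot_arm_1 and kit_small_gear are ours, not APRS classes), such a capability check could be expressed in Prolog as follows.

```prolog
% Illustrative sketch only (names hypothetical): matching a required
% kitting task against a capability ontology in the spirit of APRS.
has_capability(robot_arm_1, pick_up_part).
has_capability(robot_arm_1, insert_small_gear).

requires_capability(kit_small_gear, pick_up_part).
requires_capability(kit_small_gear, insert_small_gear).

% A robot can perform a task if it has every required capability.
can_perform(Robot, Task) :-
    forall(requires_capability(Task, Cap), has_capability(Robot, Cap)).

% ?- can_perform(robot_arm_1, kit_small_gear).   % succeeds
```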

3.6. Reasoning Scope

In this section, we compare the reasoning scope of the selected ontology-based KRSs listed in Table 9. Reasoning endows the robots with cognitive capabilities (shown in Table 8) so that they can perform tasks autonomously (Section 3.6.3) and interact through visual (Section 3.6.1) and voice (Section 3.6.2) recognition skills in real-world environments.

3.6.1. Interaction Based on Visual Recognition

KnowRob [68] recognizes high-level activities by acquiring knowledge from observations. In one study, Reference [104] used KnowRob [99] to represent and reason about knowledge of detected objects by linking the object recognition output with the mapping system. CARESSES [70] interprets visual data acquired by the robot's sensors in the light of cultural knowledge. It also equips the robot with perceptual capabilities that include human emotion estimation (e.g., happiness, anger) and daily activity recognition (e.g., sitting, cleaning). PMK [71] detects the IDs of world entities and performs pose estimation using tags. It builds domain knowledge by asserting perceptual data into PMK. Reasoning for perception, related to algorithms and sensors, is then performed to respond to queries. It extracts object features (e.g., color and dimensions) and reasons over these features. It also contains reasoning for situation analysis, which evaluates the spatial relations of objects with one another and generates agent-object relations that are later used in the planning process. SARbot [72] gives the robot fast object recognition capabilities using QR codes: a camera scans the QR code of the target and detects it during search and rescue operations. It uses Bayesian reasoning to obtain the positions of victims and obtains semantic information through QR code recognition. SmartRules [74] performs recognition tasks related to user detection, activity recognition, and fall detection. It recognizes the user's activities while cooking or preparing a meal and gives dietary advice. ARBI [75] supports user recognition, which involves face detection, emotion estimation, and gender detection; the extracted low-level data are attached to the knowledge base. In addition, it performs place recognition to identify the location where the interaction occurs. Worker-cobot [76] enables the robot co-worker to achieve cooperative manufacturing through worker activity recognition, an essential step towards physical communication between the human and the robot. The benefits of recognition are two-fold: it helps the robot acquire the worker's status, and it helps the worker establish physical communication with the co-bot during the manufacturing process, making the cooperative task more efficient. The robot recognizes the worker's picking and placing of objects by detecting the worker's natural body movements and identifies the operation being performed through the specific tools he/she is holding. In this scope, a prior work [105] of [76] added more flexibility by recognizing worker hand gestures for communicating with the co-bot, defining two modes of gesture recognition: the explicit mode recognizes gestures when a worker is directly commanding the co-bot, while the implicit mode detects specific gestures that notify the robot to perform a sequence of actions. The study in [106] represented different hand gestures used by the worker to support intuitive human-cobot communication. APRS [77] updates the current state of the world through sensor processing (SP). It gives the robot object recognition capabilities for picking up and gripping objects and includes pose estimation of the objects to be gripped. Failures in the task context are identified through the visual recognition system.
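Several of the systems above assert perceptual results into the knowledge base and then reason over them (e.g., PMK's situation analysis of agent-object spatial relations). The sketch below is a toy illustration of that pattern, assuming a hypothetical detected/4 fact produced by a perception pipeline and a coarse geometric rule; it does not reproduce PMK's actual predicate set.

```prolog
% Illustrative sketch only: asserting perceptual detections into the
% knowledge base and deriving a simple spatial relation (predicates
% and thresholds hypothetical).
:- dynamic detected/4.            % detected(Object, X, Y, Z) in meters

% Record what the perception pipeline reports (e.g., tag-based pose estimation).
observe(Object, X, Y, Z) :- assertz(detected(Object, X, Y, Z)).

% A coarse "on top of" relation derived from detected poses.
on(A, B) :-
    detected(A, XA, YA, ZA),
    detected(B, XB, YB, ZB),
    A \== B,
    ZA > ZB,
    abs(XA - XB) < 0.05,
    abs(YA - YB) < 0.05.

% ?- observe(cup, 0.40, 0.20, 0.80), observe(table, 0.41, 0.21, 0.75), on(cup, table).
```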

3.6.2. Interaction Based on Voice Recognition

CARESSES [70] introduced reasoning based on a cultural KB for efficient human-robot interaction and intelligent verbal communication with semantic comprehension to recognize relevant keywords. In the context of CARESSES, References [107,108] introduced two human-robot speech-based interaction scenarios involving cultural knowledge-based assumptions. IEQ [73] endows the humanoid social robot with communication skills to interact with humans using dialog and speech recognition modules. The dialog module enables the social robot Nao to communicate with the people living in an indoor environment and to obtain their perceptions of and preferences about the environment through a POE questionnaire. The speech recognition module uses Google speech recognition (https://venturebeat.com/2015/05/28/google-says-its-speech-recognition-technology-now-has-only-an-8-word-error-rate/, https://en.wikipedia.org/wiki/GOOG-411) and the Speech-to-Text API service (https://cloud.google.com/speech-to-text/docs/libraries) for converting recorded audio into text. It also allows the robot to perform several complex interactions related to the indoor environment of a building and its occupants, such as building features (e.g., window size), the user's features and preferences, and the user's perception of the environment. ARBI [75] provides human-robot interactive services with speech recognition and improves the natural understanding of the user's utterances by grounding predicate information extracted from the utterances through the dialog management system. It handles the word-matching issue of grounding non-pronoun words using the Google word2vec model.
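Keyword-driven verbal interaction of the kind described for CARESSES and IEQ can be illustrated with a small lookup from recognized words to dialogue intents. The vocabulary and intent names below are invented for illustration and do not reproduce either system's dialog module.

```prolog
% Illustrative sketch only: mapping keywords recognized from an utterance
% to a dialogue intent (vocabulary and intent names hypothetical).
keyword_intent(cold,        adjust_thermal_comfort).
keyword_intent(temperature, adjust_thermal_comfort).
keyword_intent(dark,        adjust_visual_comfort).
keyword_intent(noisy,       adjust_acoustic_comfort).

% Pick the intent associated with the first recognized keyword.
intent_for_utterance(Words, Intent) :-
    member(W, Words),
    keyword_intent(W, Intent), !.

% ?- intent_for_utterance([it, is, too, cold, here], I).
% I = adjust_thermal_comfort.
```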

3.6.3. Task Execution and Action Planning

Cognitive capabilities in a robot enable it to interpret task execution and actions. For this, a proper representation of the actions the robot performs to accomplish specific tasks in the environment is essential, and such a representation is included in most of the compared KR systems.
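A common way to represent actions for symbolic task planning is with preconditions, delete lists, and add lists (STRIPS-style). The sketch below shows such a toy representation in Prolog; it is only an illustration of the general idea, not the PDDL encoding or the FF planner used by PMK, nor the constraint-based planners used by CARESSES.

```prolog
% Illustrative sketch only: a toy precondition/effect action model of the
% kind a symbolic task planner consumes (action and fluent names hypothetical).
action(pick(Obj, Loc),
       [hand_empty, at(robot, Loc), at(Obj, Loc)],   % preconditions
       [hand_empty, at(Obj, Loc)],                   % delete list
       [holding(Obj)]).                              % add list

action(place(Obj, Loc),
       [holding(Obj), at(robot, Loc)],
       [holding(Obj)],
       [hand_empty, at(Obj, Loc)]).

% An action is applicable if all its preconditions hold in the state.
applicable(A, State) :- action(A, Pre, _, _), subset(Pre, State).

% Applying an action removes the delete list and adds the add list.
apply(A, State, New) :-
    action(A, _, Del, Add),
    subtract(State, Del, Rest),
    append(Add, Rest, New).

% ?- State = [hand_empty, at(robot, table), at(cup, table)],
%    applicable(pick(cup, table), State),
%    apply(pick(cup, table), State, Next).
% Next = [holding(cup), at(robot, table)].
```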
KnowRob [68] uses motion control reasoning to provide tight coupling between action representations and executable motion descriptions for attaining competence in manipulation tasks. It performs simulation-based reasoning to envisage the results of motion parametrization and plans to predict the precise course of action. It avoids the frame problem and, compared to logic-based reasoning, has the advantage of capturing physical behavior. Moreover, its knowledge representation of collisions and gravity is more generic than rules or manually encoded knowledge. OROSU [69] is designed to track action execution during orthopedic surgery. It uses the HermiT and Pellet reasoners to reason about actions in real surgical scenarios, e.g., hip surgery. It uses Pellet to obtain the query output of defined actions, e.g., obtain3DfemurModelFromUS and RoboticBoneTracking, for performing tasks in the OperatingRoom, and it accomplishes more complex queries with the HermiT reasoner, e.g., obtaining a 3D model from US images or robotic bone tracking to perform the task in the operating room. CARESSES [70] enables the robot to match its behavior to the cultural identity of the user, which helps the robot form its plan and execute actions in a culturally aware manner, making appropriate decisions to achieve the goal. It uses constraint-based approaches [109] for plan generation with a focus on cultural constraints and human-aware planning [110]. In the same direction, Reference [111] used a constraint-based planner with cultural sensitivity [112] to execute the robot's actions and maintain the states of the environment and people. PMK [71] uses a fast-forward task-planning approach [95] combined with physics-based motion planning [96] in the TAMP module, which facilitates task planning and execution by providing a sequence of actions to the robot. PMK supports this process with geometric reasoning, dynamic interactions, and manipulation constraints; its execution module performs the serving task. SARbot [72] uses the JESS reasoner, a Java-based CLIPS reasoner [113]. It first reasons about the task according to SWRL rules and the current knowledge, then performs the task by breaking it into atomic actions. It allows the robot to find the victim's location through inference and semantic information, while task reasoning is performed with JESS. IEQ [73] enables the robot to execute human well-being tasks for visual, thermal, and acoustic comfort in the indoor environment. The Light_Sound data acquisition module obtains the user's responses and sends them to the normative reasoner, which evaluates compliance with normative standards. Consequently, the robot gives suggestions to the user, based on the compliance results and the user's perceptions and preferences, to resolve possible discomfort in the indoor environment. The reasoning module in SmartRules [74] executes actions based on contextual information. It supports reactive reasoning under the closed-world assumption (CWA) and starts the reasoning process by translating SmartRules into the Drools [114] inference engine. It uses RRC to process information from the abstraction layer and generates actions after the reasoning process; the platform then decodes these actions into actual commands that are sent to the actuators.
When a person is present but not near a display MO, an action is triggered by the RRC to move the service robot to the person's location: the robot observes the person's presence in its internal map and moves to a position (x, y), which enables it to perform tasks such as asking whether the person needs any information or reminding the person to take medicine. If a fall is detected, the RRC executes an alarm action to notify other family members. In ARBI [75], actions range from physical behaviors to complex tasks that involve guiding and reasoning. It implements a query interface consisting of four essential KB functions and ten knowledge query protocols. It uses the context reasoner to infer spatial and temporal information and stores the important results in the knowledge base as static information. Worker-cobot [76] takes cognitive actions in distributing the assembly task among the workers. If the workers have assembled an equal number of products, Drools assigns the first task randomly and keeps alternating whenever this situation occurs again. It reasons about the interaction between the order holon (OH) and operational research holons (ORHs) using a low-cost algorithm written in Drools, which balances the assembly task distribution among the workers by following the business rules management system (BRMS). APRS [77] enables the robot to perform the kitting process when the operator receives a command to start a manufacturing activity; it then requests the task scheduler to act. The executor translates actions from the plan and continuously tracks the execution status. Its PickUpActionType is a complex action that requires the ability to identify an object, measure the grip force, and determine whether the object is held firmly. If a failure occurs because an object is dropped during the kitting process, the executor aborts the plan and sends a status message to the operator for re-planning. The assembly action brings components to the assembly station in kits, which contain the necessary parts (e.g., small gears) to complete the object assembly process.
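The reactive, closed-world rules described for SmartRules (a person present but not near a display triggers a move action; a detected fall triggers an alarm) can be sketched as follows. The real system expresses these as SmartRules/Drools rules, so the Prolog below, with its hypothetical fact and action names, is only an analogue of that behavior.

```prolog
% Illustrative sketch only: reactive rules under the closed-world
% assumption, analogous to the RRC behavior described above (the actual
% system uses Drools; fact and action names here are hypothetical).
:- dynamic fact/1.

rule(move_to_person(P, X, Y)) :-
    fact(person_present(P, X, Y)),
    \+ fact(near_display(P)).          % closed world: absence means false

rule(trigger_alarm(P)) :-
    fact(fall_detected(P)).

% Collect every action whose rule currently fires.
fire(Actions) :- findall(A, rule(A), Actions).

% ?- assertz(fact(person_present(alice, 2.0, 3.5))),
%    assertz(fact(fall_detected(alice))),
%    fire(As).
% As = [move_to_person(alice, 2.0, 3.5), trigger_alarm(alice)].
```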

3.7. Limitations/Challenges

We examined ontology-based knowledge representation systems that have been implemented successfully in autonomous robots to perform tasks in industrial, domestic, and hospital environments. Each robotic KRS, however, has its own drawbacks, described in Table 10. In addition to these limitations, an important issue in these KRSs is the lack of a standard for developing the ontologies used to represent knowledge. Another common problem that needs to be addressed is accessibility: KRSs should be developed such that they can be easily accessed by developers all over the world for implementation in robotic applications.

4. Summary

In this section, we summarize the most relevant findings on the knowledge representation systems reviewed in our survey (Section 3), in which ontologies were used as the knowledge artifact. We investigated approaches that give robots semantic skills; for this, ontology-based knowledge representation approaches that endow robots with semantic understanding were explored and evaluated based on seven research questions and criteria. The first criterion concerns the application domain of the selected robotic KRSs: six KRSs are used for domestic tasks, two were developed for medical robotics, and the remaining two were designed for industrial robotics. The next three criteria discuss the key idea and contribution, the development tools, and the architecture. The fifth criterion concerns the ontological scope. KnowRob, the first of the surveyed systems to use ontologies for a KRS, is well documented, whereas the other KR systems are more recent. OROSU is the first KRS that integrates robotic and surgical ontologies; however, it is not fully accessible and lacks the definitions needed to understand its terms. CARESSES's cultural KB contains hand-crafted ontologies that tend to be inflexible and complex, which needs to be addressed to make sharing and porting of knowledge in the cultural domain more effective and efficient. The sixth criterion relates to the reasoning scope of the ontologies in each robotic KRS, which demonstrates how they support the robot's cognitive capabilities. Our review showed that ontology-based knowledge representation and reasoning endow robots with a wide range of cognitive capabilities, including perceptual capabilities to estimate human emotions, recognize objects and places, detect human actions and activities, and plan and perform search and rescue operations. Based on the studies reviewed in this article, we conclude that the efforts behind these KRSs should be continued and kept up to date to support reuse, and that KRSs should be extended with more options for implementation in new applications. In addition, more work is required on standard definitions and proper documentation of KRSs.

5. Discussion and Future Research Directions

The comparison of ontology-based KR systems in this article did not cover all the ontological components and cognitive capabilities of a KR-based robotic system, which is to be expected: to our knowledge, there is no fixed or definitive method that could completely address all aspects of these kinds of ontologies in a review. However, considering their most important aspects, we presented a summary of our evaluation results (Table 8 and Table 11) for the knowledge bases of ten systems in the robotic domain. The goal of our survey was thus to review novel yet valuable recent research developments in ontology-based KR systems for roboticists.
The analysis of ontology-based KR systems led us to the following potential areas for future research directions.
  • Researchers have focused more on developing ontologies for knowledge representation, while the development of the accompanying reasoning and query mechanisms lags behind.
  • In the future, more research is needed towards the standardization and efficient implementation of ontology-based knowledge representation systems.
  • We believe that, along with the ontologies themselves, future studies should also aim at developing efficient query and reasoning mechanisms that can be applied to many distributed ontologies under limited resources. In this direction, further work is required to achieve a sustainable solution with a sound understanding of resources and quality.
  • In addition, future research can contribute more to robot autonomy if the potential of combining ontology-based KR systems with context awareness is considered more carefully.
  • Looking forward, future research should be continued in more realistic settings for the development of culturally competent KRSs, which will endow the robots with the ability to perform various complex tasks in dynamic environments by understanding culture-specific needs and preferences.
To summarize our discussion, the results achieved so far in ontology-based KR systems for autonomous robots remain preliminary. Therefore, we highlighted the challenges, research gaps, and limitations that require further work. It is expected that, under the auspices of artificial intelligence, semantic web technology, and other accompanying ideas and visions, the development of this field toward real-world robotic applications will continue to advance.

6. Conclusions

This survey was intended to present the recent developments in ontology-based knowledge representation systems for robotics and to lay the groundwork to inspire future research in this area. In the present article, we began by outlining the importance of ontology-based knowledge representation systems for robots in domestic, hospital, and industrial environments. Then, in the context of social robots' capabilities, we focused on recent publications related to robotic knowledge representation systems between the years 2014 and 2020. We selected ontology-based knowledge base systems for our survey using inclusion and exclusion criteria and evaluated them against seven research questions, investigating the ideas and contributions that enable robots to have semantic capabilities. In this context, our integrated overview of ontology-based semantic knowledge representation systems also paid special attention to their ontologies, reasoning capabilities, development tools, architectures, applications, and limitations. Finally, we concluded the paper with a discussion and suggested some promising research directions for future work.

Author Contributions

Conceptualization, S.M., S.-H.J., Y.G.R. and T.-Y.K.; Methodology, S.M., S.-H.J. and Y.G.R.; Validation, S.M., S.-H.J., Y.G.R. and T.-Y.K.; Formal analysis, S.M. and T.-Y.K.; Investigation, S.M., S.-H.B., E.-J.K. and K.-J.J.; Data curation, S.M., S.-H.J., Y.G.R., S.-H.B., E.-J.K. and K.-J.J.; Writing—original draft preparation, S.M., S.-H.J. and Y.G.R.; Writing—review and editing, S.M., S.-H.J., Y.G.R., S.-H.B., E.-J.K. and K.-J.J.; Visualization, S.M., S.-H.B., E.-J.K. and K.-J.J.; Supervision, T.-Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Korea Evaluation Institute of Industrial Technology (KEIT) funded by the Ministry of Trade, Industry & Energy (MOTIE) (No. 1415162820 and 1415164140).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Olszewska, J.I.; Barreto, M.; Bermejo-Alonso, J.; Carbonera, J.; Chibani, A.; Fiorini, S.; Goncalves, P.; Habib, M.; Khamis, A.; Olivares, A.; et al. Ontology for autonomous robotics. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August 2017; pp. 189–194. [Google Scholar]
  2. Bayat, B.; Bermejo-Alonso, J.; Carbonera, J.; Facchinetti, T.; Fiorini, S.; Goncalves, P.; Jorge, V.A.; Habib, M.; Khamis, A.; Melo, K.; et al. Requirements for building an ontology for autonomous robots. Ind. Robot. Int. J. 2016, 43, 469–480. [Google Scholar] [CrossRef]
  3. Čaić, M.; Mahr, D.; Oderkerken-Schröder, G. Value of social robots in services: Social cognition perspective. J. Serv. Mark. 2019, 33, 463–478. [Google Scholar] [CrossRef]
  4. Lim, G.H.; Suh, I.H.; Suh, H. Ontology-based unified robot knowledge for service robots in indoor environments. IEEE Trans. Syst. Man Cybern. Part Syst. Hum. 2010, 41, 492–509. [Google Scholar] [CrossRef]
  5. Munir, K.; Anjum, M.S. The use of ontologies for effective knowledge modelling and information retrieval. Appl. Comput. Inform. 2018, 14, 116–126. [Google Scholar] [CrossRef]
  6. Olivares-Alarcos, A.; Beßler, D.; Khamis, A.; Goncalves, P.; Habib, M.K.; Bermejo-Alonso, J.; Barreto, M.; Diab, M.; Rosell, J.; Quintas, J.; et al. A review and comparison of ontology-based approaches to robot autonomy. Knowl. Eng. Rev. 2019, 34, e29. [Google Scholar] [CrossRef] [Green Version]
  7. Topp, E.A.; Stenmark, M.; Ganslandt, A.; Svensson, A.; Haage, M.; Malec, J. Ontology-Based Knowledge Representation for Increased Skill Reusability in Industrial Robots. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5672–5678. [Google Scholar]
  8. Azevedo, H.; Belo, J.P.R.; Romero, R.A. OntPercept: A Perception Ontology for Robotic Systems. In Proceedings of the 2018 IEEE Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE), Joao Pessoa, Brazil, 6–10 November 2018; pp. 469–475. [Google Scholar]
  9. Joo, S.H.; Manzoor, S.; Rocha, Y.G.; Bae, S.H.; Lee, K.H.; Kuc, T.Y.; Kim, M. Autonomous navigation framework for intelligent robots based on a semantic environment modeling. Appl. Sci. 2020, 10, 3219. [Google Scholar] [CrossRef]
  10. Manzoor, S.; Joo, S.H.; Rocha, Y.G.; Lee, H.U.; Kuc, T.Y. A Novel Semantic SLAM Framework for Humanlike High-Level Interaction and Planning in Global Environment. In Proceedings of the 1st International Workshop on the Semantic Descriptor, Semantic Modeling and Mapping for Humanlike Perception and Navigation of Mobile Robots toward Large Scale Long-Term Autonomy (SDMM1), Macau, China, 8 November 2019. [Google Scholar]
  11. Ersen, M.; Oztop, E.; Sariel, S. Cognition-enabled robot manipulation in human environments: Requirements, recent work, and open problems. IEEE Robot. Autom. Mag. 2017, 24, 108–122. [Google Scholar] [CrossRef]
  12. Perzylo, A.; Somani, N.; Profanter, S.; Kessler, I.; Rickert, M.; Knoll, A. Intuitive instruction of industrial robots: Semantic process descriptions for small lot production. In Proceedings of the 2016 IEEE/rsj International Conference on Intelligent Robots and Systems (IROS), Daejeon Convention Center, DaeJeon, Korea, 9–14 October 2016; pp. 2293–2300. [Google Scholar]
  13. Waibel, M.; Beetz, M.; Civera, J.; d’Andrea, R.; Elfring, J.; Galvez-Lopez, D.; Häussermann, K.; Janssen, R.; Montiel, J.; Perzylo, A.; et al. Roboearth. IEEE Robot. Autom. Mag. 2011, 18, 69–82. [Google Scholar] [CrossRef] [Green Version]
  14. Tenorth, M.; Beetz, M. KnowRob: A knowledge processing infrastructure for cognition-enabled robots. Int. J. Robot. Res. 2013, 32, 566–590. [Google Scholar] [CrossRef]
  15. Beetz, M.; Tenorth, M.; Winkler, J. Open-ease. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Washington, DC, USA, 25–30 May 2015; pp. 1983–1990. [Google Scholar]
  16. Saxena, A.; Jain, A.; Sener, O.; Jami, A.; Misra, D.K.; Koppula, H.S. Robobrain: Large-scale knowledge engine for robots. arXiv 2014, arXiv:1412.0691. [Google Scholar]
  17. Daruna, A.; Liu, W.; Kira, Z.; Chetnova, S. Robocse: Robot common sense embedding. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9777–9783. [Google Scholar]
  18. Poux, F.; Ponciano, J.J. Self-Learning Ontology For Instance Segmentation of 3d Indoor Point Cloud. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 309–316. [Google Scholar] [CrossRef]
  19. Kanjaruek, S.; Li, D.; Qiu, R.; Boonsim, N. Automated ontology framework for service robots. In Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 219–224. [Google Scholar]
  20. Kanjaruek, S.; Li, D. Tracking Objects Robot for healthcare environments. In Proceedings of the 2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Exeter, UK, 21–23 June 2017; pp. 894–899. [Google Scholar]
  21. Velardi, P.; Pazienza, M.T.; Fasolo, M. How to encode semantic knowledge: A method for meaning representation and computer-aided acquisition. Comput. Linguist. 1991, 17, 153–170. [Google Scholar]
  22. Gibaud, B.; Forestier, G.; Feldmann, C.; Ferrigno, G.; Gonçalves, P.; Haidegger, T.; Julliard, C.; Katić, D.; Kenngott, H.; Maier-Hein, L.; et al. Toward a standard ontology of surgical process models. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1397–1408. [Google Scholar] [CrossRef] [Green Version]
  23. Björkelund, A.; Bruyninckx, H.; Malec, J.; Nilsson, K.; Nugues, P. Knowledge for Intelligent Industrial Robots. In Proceedings of the AAAI Spring Symposium on Designing Intelligent Robots: Reintegrating AI, Palo Alto, CA, USA, 26 March 2012; Volume 12, p. 2. [Google Scholar]
  24. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and human-robot co-working. Procedia Comput. Sci. 2019, 158, 688–695. [Google Scholar] [CrossRef]
  25. Skobelev, P.; Borovik, S.Y. On the way from Industry 4.0 to Industry 5.0: From digital manufacturing to digital society. Ind. 4.0 2017, 2, 307–311. [Google Scholar]
  26. Sun, X.; Zhang, Y. A Review of Domain Knowledge Representation for Robot Task Planning. In Proceedings of the 2019 4th International Conference on Mathematics and Artificial Intelligence, Chengdu, China, 12–15 April 2019; pp. 176–183. [Google Scholar]
  27. Thosar, M.; Zug, S.; Skaria, A.M.; Jain, A. A Review of Knowledge Bases for Service Robots in Household Environments. 2018, pp. 98–110. Available online: https://www.researchgate.net/publication/328249457_A_Review_of_Knowledge_Bases_for_Service_Robots_in_Household_Environments (accessed on 29 April 2021).
  28. Gouidis, F.; Vassiliades, A.; Patkos, T.; Argyros, A.; Bassiliades, N.; Plexousakis, D. A review on intelligent object perception methods combining knowledge-based reasoning and machine learning. arXiv 2019, arXiv:1912.11861. [Google Scholar]
  29. Stojanovic, L.; Schneider, J.; Maedche, A.; Libischer, S.; Studer, R.; Lumpp, T.; Abecker, A.; Breiter, G.; Dinger, J. The role of ontologies in autonomic computing systems. IBM Syst. J. 2004, 43, 598–616. [Google Scholar] [CrossRef]
  30. Bermejo-Alonso, J.; Sanz, R.; Rodríguez, M.; Hernández, C. Ontology-based engineering of autonomous systems. In Proceedings of the 2010 IEEE Sixth International Conference on Autonomic and Autonomous Systems, Cancun, Mexico, 7–13 March 2010; pp. 47–51. [Google Scholar]
  31. Bermejo-Alonso, J.; Hernández, C.; Sanz, R. Model-based engineering of autonomous systems using ontologies and metamodels. In Proceedings of the 2016 IEEE International Symposium on Systems Engineering (ISSE), Edinburgh, UK, 21–26 July 2016; pp. 1–8. [Google Scholar]
  32. Bermejo-Alonso, J.; Sanz, R.; Rodríguez, M.; Hernández, C. Ontology engineering for the autonomous systems domain. In International Joint Conference on Knowledge Discovery, Knowledge Engineering, and Knowledge Management; Springer: Berlin/Heidelberg, Germany, 2011; pp. 263–277. [Google Scholar]
  33. Bozcuoğlu, A.K.; Kazhoyan, G.; Furuta, Y.; Stelter, S.; Beetz, M.; Okada, K.; Inaba, M. The exchange of knowledge using cloud robotics. IEEE Robot. Autom. Lett. 2018, 3, 1072–1079. [Google Scholar] [CrossRef]
  34. de Freitas, E.P.; Olszewska, J.I.; Carbonera, J.L.; Fiorini, S.R.; Khamis, A.; Ragavan, S.V.; Barreto, M.E.; Prestes, E.; Habib, M.K.; Redfield, S.; et al. Ontological concepts for information sharing in cloud robotics. J. Ambient. Intell. Humaniz. Comput. 2020, 1–12. [Google Scholar] [CrossRef]
  35. Dogmus, Z.; Erdem, E.; Patoglu, V. RehabRobo-Onto: Design, development and maintenance of a rehabilitation robotics ontology on the cloud. Robot. Comput. Integr. Manuf. 2015, 33, 100–109. [Google Scholar] [CrossRef]
  36. Balakirsky, S.; Kootbally, Z.; Schlenoff, C.; Kramer, T.; Gupta, S. An industrial robotic knowledge representation for kit building applications. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 1365–1370. [Google Scholar]
  37. Schäfer, F.; Kriesten, R.; Chrenko, D.; Gechter, F. No need to learn from each other? Potentials of knowledge modeling in autonomous vehicle systems engineering towards new methods in multidisciplinary contexts. In Proceedings of the 2017 IEEE International Conference on Engineering, Technology and Innovation (ICE/ITMC), Madeira, Portugal, 27–29 June 2017; pp. 462–468. [Google Scholar]
  38. Jäger, G.; Mueller, C.A.; Thosar, M.; Zug, S.; Birk, A. Towards robot-centric conceptual knowledge acquisition. arXiv 2018, arXiv:1810.03583. [Google Scholar]
  39. Beßler, D.; Koralewski, S.; Beetz, M. Knowledge Representation for Cognition- and Learning-enabled Robot Manipulation. Available online: https://www.semanticscholar.org/paper/Knowledge-Representation-for-Cognition-and-Robot-Be%C3%9Fler-Koralewski/a912517f69db6dd78f80c249320d5a781a67a70d (accessed on 29 April 2021).
  40. Fischer, L.; Hasler, S.; Deigmöller, J.; Schnürer, T.; Redert, M.; Pluntke, U.; Nagel, K.; Senzel, C.; Ploennigs, J.; Richter, A.; et al. Which Tool to Use? Grounded Reasoning in Everyday Environments with Assistant Robots. CogRob@ KR. 2018, pp. 3–10. Available online: https://www.semanticscholar.org/paper/Which-tool-to-use-Grounded-reasoning-in-everyday-Fischer-Hasler/25c3841a905553a370f89d657f3376f63207dc3b (accessed on 29 April 2021).
  41. Pinacho, L.S.; Wich, A.; Yazdani, F.; Beetz, M. Acquiring knowledge of object arrangements from human examples for household robots. In Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz); Springer: Berlin/Heidelberg, Germany, 2018; pp. 131–138. [Google Scholar]
  42. Yang, G.; Wang, S.; Yang, J. Desire-driven reasoning for personal care robots. IEEE Access 2019, 7, 75203–75212. [Google Scholar] [CrossRef]
  43. Vassiliades, A.; Bassiliades, N.; Gouidis, F.; Patkos, T. A Knowledge Retrieval Framework for Household Objects and Actions with External Knowledge. InInternational Conference on Semantic Systems; Springer: Cham, Switzerland, 2020; pp. 36–52. [Google Scholar]
  44. Gehrig, D.; Krauthausen, P.; Rybok, L.; Kuehne, H.; Hanebeck, U.D.; Schultz, T.; Stiefelhagen, R. Combined intention, activity, and motion recognition for a humanoid household robot. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 4819–4825. [Google Scholar]
  45. Patterson, D.J.; Fox, D.; Kautz, H.; Philipose, M. Fine-grained activity recognition by aggregating abstract object usage. In Proceedings of the Ninth IEEE International Symposium on Wearable Computers (ISWC’05), Osaka, Japan, 18–21 October 2005; pp. 44–51. [Google Scholar]
  46. Ramirez-Amaro, K.; Beetz, M.; Cheng, G. Transferring skills to humanoid robots by extracting semantic representations from observations of human activities. Artif. Intell. 2017, 247, 95–118. [Google Scholar] [CrossRef]
  47. Lemaignan, S.; Warnier, M.; Sisbot, E.A.; Clodic, A.; Alami, R. Artificial cognition for social human–robot interaction: An implementation. Artif. Intell. 2017, 247, 45–69. [Google Scholar] [CrossRef] [Green Version]
  48. Agostini, A.; Torras, C.; Wörgötter, F. Learning weakly correlated cause–effects for gardening with a cognitive system. Eng. Appl. Artif. Intell. 2014, 36, 178–194. [Google Scholar] [CrossRef] [Green Version]
  49. Agostini, A.; Torras, C.; Wörgötter, F. Efficient interactive decision making framework for robotic applications. Artif. Intell. 2017, 247, 187–212. [Google Scholar] [CrossRef] [Green Version]
  50. Potaov, A. Enabling Cognitive Visual Question Answering. Available online: https://blog.singularitynet.io/enabling-cognitive-visual-question-answering-a93febd454a7 (accessed on 8 March 2021).
  51. He, B.; Xia, M.; Yu, X.; Jian, P.; Meng, H.; Chen, Z. An educational robot system of visual question answering for preschoolers. In Proceedings of the 2017 IEEE 2nd International Conference on Robotics and Automation Engineering (ICRAE), Shanghai, China, 29–31 December 2017; pp. 441–445. [Google Scholar]
  52. Keren, G.; Fridin, M. Kindergarten Social Assistive Robot (KindSAR) for children’s geometric thinking and metacognitive development in preschool education: A pilot study. Comput. Hum. Behav. 2014, 35, 400–412. [Google Scholar] [CrossRef]
  53. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef] [Green Version]
  54. Li, G.; Su, H.; Zhu, W. Incorporating external knowledge to answer open-domain visual questions with dynamic memory networks. arXiv 2017, arXiv:1712.00733. [Google Scholar]
  55. Narasimhan, M.; Schwing, A.G. Straight to the facts: Learning knowledge base retrieval for factual visual question answering. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 451–468. [Google Scholar]
  56. Wang, P.; Wu, Q.; Shen, C.; Hengel, A.v.d.; Dick, A. Explicit knowledge-based reasoning for visual question answering. arXiv 2015, arXiv:1511.02570. [Google Scholar]
  57. Wu, Q.; Wang, P.; Shen, C.; Dick, A.; Van Den Hengel, A. Ask me anything: Free-form visual question answering based on knowledge from external sources. In Proceedings of the IEEE Conference on Computer Vision and Pattern, Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4622–4630. [Google Scholar]
  58. Shah, S.; Mishra, A.; Yadati, N.; Talukdar, P.P. Kvqa: Knowledge-aware visual question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, Hi, USA, 27 January–1 February 2019; Volume 33, pp. 8876–8884. [Google Scholar]
  59. Wu, Q.; Teney, D.; Wang, P.; Shen, C.; Dick, A.; van den Hengel, A. Visual question answering: A survey of methods and datasets. Comput. Vis. Image Underst. 2017, 163, 21–40. [Google Scholar] [CrossRef] [Green Version]
  60. Malinowski, M.; Rohrbach, M.; Fritz, M. Ask your neurons: A neural-based approach to answering questions about images. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1–9. [Google Scholar]
  61. Ma, L.; Lu, Z.; Li, H. Learning to answer questions from image using convolutional neural network. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
  62. Zhu, Y.; Groth, O.; Bernstein, M.; Fei-Fei, L. Visual7w: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4995–5004. [Google Scholar]
  63. Xu, H.; Saenko, K. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. InEuropean Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 451–466. [Google Scholar]
  64. Chen, K.; Wang, J.; Chen, L.C.; Gao, H.; Xu, W.; Nevatia, R. Abc-cnn: An attention based convolutional neural network for visual question answering. arXiv 2015, arXiv:1511.05960. [Google Scholar]
  65. Yang, Z.; He, X.; Gao, J.; Deng, L.; Smola, A. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 21–29. [Google Scholar]
  66. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  67. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. InEuropean Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
  68. Beetz, M.; Beßler, D.; Haidu, A.; Pomarlan, M.; Bozcuoğlu, A.K.; Bartels, G. Know rob 2.0—A 2nd generation knowledge processing framework for cognition-enabled robotic agents. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 512–519. [Google Scholar]
  69. Gonçalves, P.J.; Torres, P.M. Knowledge representation applied to robotic orthopedic surgery. Robot. Comput.-Integr. Manuf. 2015, 33, 90–99. [Google Scholar] [CrossRef]
  70. Bruno, B.; Chong, N.Y.; Kamide, H.; Kanoria, S.; Lee, J.; Lim, Y.; Pandey, A.K.; Papadopoulos, C.; Papadopoulos, I.; Pecora, F.; et al. The CARESSES EU-Japan project: Making assistive robots culturally competent. In Italian Forum of Ambient Assisted Living; Springer: Berlin/Heidelberg, Germany, 2017; pp. 151–169. [Google Scholar]
  71. Diab, M.; Akbari, A.; Ud Din, M.; Rosell, J. PMK—A Knowledge Processing Framework for Autonomous Robotics Perception and Manipulation. Sensors 2019, 19, 1166. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  72. Sun, X.; Zhang, Y.; Chen, J. High-Level Smart Decision Making of a Robot Based on Ontology in a Search and Rescue Scenario. Future Internet 2019, 11, 230. [Google Scholar] [CrossRef] [Green Version]
  73. Ribino, P.; Bonomolo, M.; Lodato, C.; Vitale, G. A Humanoid Social Robot Based Approach for Indoor Environment Quality Monitoring and Well-Being Improvement. Int. J. Soc. Robot. 2020, 13, 277–296. [Google Scholar] [CrossRef]
  74. Sabri, L.; Bouznad, S.; Rama Fiorini, S.; Chibani, A.; Prestes, E.; Amirat, Y. An integrated semantic framework for designing context-aware Internet of Robotic Things systems. Integr. Comput.-Aided Eng. 2018, 25, 137–156. [Google Scholar] [CrossRef]
  75. Chang, D.S.; Cho, G.H.; Choi, Y.S. Ontology-based knowledge model for human-robot interactive services. In Proceedings of the 35th Annual ACM Symposium on Applied Computing, Brno, Czech Republic, 30 March–3 April 2020; pp. 2029–2038. [Google Scholar]
  76. Sadik, A.R.; Urban, B. An ontology-based approach to enable knowledge representation and reasoning in worker-cobot agile manufacturing. Future Internet 2017, 9, 90. [Google Scholar] [CrossRef] [Green Version]
  77. Kootbally, Z.; Kramer, T.R.; Schlenoff, C.; Gupta, S.K. Implementation of an ontology-based approach to enable agility in kit building applications. Int. J. Semant. Comput. 2018, 12, 5–24. [Google Scholar] [CrossRef]
  78. Gonçalves, P. Towards an ontology for orthopaedic surgery, application to hip resurfacing. In Proceedings of the Hamlyn Symposium on Medical Robotics, London, UK, 22–25 June 2013; pp. 61–62. [Google Scholar]
  79. Diab, M.; Akbari, A.; Rosell, J. An ontology framework for physics-based manipulation planning. In Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2017; pp. 452–464. [Google Scholar]
  80. Zhao, J.; Gao, J.; Zhao, F.; Liu, Y. A search-and-rescue robot system for remotely sensing the underground coal mine environment. Sensors 2017, 17, 2426. [Google Scholar] [CrossRef] [Green Version]
  81. Bujari, A.; Calafate, C.T.; Cano, J.C.; Manzoni, P.; Palazzi, C.E.; Ronzani, D. A location-aware waypoint-based routing protocol for airborne DTNs in search and rescue scenarios. Sensors 2018, 18, 3758. [Google Scholar] [CrossRef] [Green Version]
  82. Matarić, M.J. Socially assistive robotics: Human augmentation versus automation. Sci. Robot. 2017, 2, eaam5410. [Google Scholar] [CrossRef] [PubMed]
  83. Rossi, S.; Staffa, M.; Tamburro, A. Socially assistive robot for providing recommendations: Comparing a humanoid robot with a mobile application. Int. J. Soc. Robot. 2018, 10, 265–278. [Google Scholar] [CrossRef]
  84. Choi, J.H.; Lee, K. Investigation of the feasibility of POE methodology for a modern commercial office building. Build. Environ. 2018, 143, 591–604. [Google Scholar] [CrossRef]
  85. McGuinness, D.L.; Van Harmelen, F. OWL web ontology language overview. W3C Recomm. 2004, 10, 2004. [Google Scholar]
  86. Wielemaker, J.; Schrijvers, T.; Triska, M.; Lager, T. Swi-prolog. arXiv 2010, arXiv:1011.5332. [Google Scholar] [CrossRef] [Green Version]
  87. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Wheeler, R.; Ng, A.Y. ROS: An open-source Robot Operating System. In Proceedings of the ICRA Workshop on Open Source Software, Kobe, Japan, 12 May 2009; Volume 3, p. 5. [Google Scholar]
  88. Bordini, R.H.; Hübner, J.F. BDI agent programming in AgentSpeak using Jason. InInternational Workshop on Computational Logic in Multi-Agent Systems; Springer: Berlin/Heidelberg, Germany, 2005; pp. 143–164. [Google Scholar]
  89. AgentSpeak, A.R. AgentSpeak (L): BDI agents speak out in a logical computable language. Aust. Artif. Intell. Inst. 1996, 1, 42–55. [Google Scholar]
  90. Amarilli, F.; Amigoni, F.; Fugini, M.G.; Zarri, G.P. A semantic-rich approach to IoT using the generalized world entities paradigm. In Manag. Web Things; Elsevier: Amsterdam, The Netherlands, 2017; pp. 105–147. [Google Scholar]
  91. Bellifemine, F.; Bergenti, F.; Caire, G.; Poggi, A. JADE—A java agent development framework. In Multi-Agent Programming; Springer: Berlin/Heidelberg, Germany, 2005; pp. 125–147. [Google Scholar]
  92. Chaib-draa, B.; Dignum, F. Trends in agent communication language. Comput. Intell. 2002, 18, 89–101. [Google Scholar] [CrossRef]
  93. Prestes, E.; Carbonera, J.L.; Fiorini, S.R.; Jorge, V.A.; Abel, M.; Madhavan, R.; Locoro, A.; Goncalves, P.; Barreto, M.E.; Habib, M.; et al. Towards a core ontology for robotics and automation. Robot. Auton. Syst. 2013, 61, 1193–1204. [Google Scholar] [CrossRef]
  94. Ferro, E.; Girolami, M.; Salvi, D.; Mayer, C.; Gorman, J.; Grguric, A.; Ram, R.; Sadat, R.; Giannoutakis, K.M.; Stocklöw, C. The universaal platform for aal (ambient assisted living). J. Intell. Syst. 2015, 24, 301–319. [Google Scholar] [CrossRef]
  95. Hoffmann, J.; Nebel, B. The FF planning system: Fast plan generation through heuristic search. J. Artif. Intell. Res. 2001, 14, 253–302. [Google Scholar] [CrossRef]
  96. Akbari, A.; Rosell, J. Task planning using physics-based heuristics on manipulation actions. In Proceedings of the 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), Berlin, Germany, 6–9 September 2016; pp. 1–8. [Google Scholar]
  97. Fox, M.; Long, D. PDDL2. 1: An extension to PDDL for expressing temporal planning domains. J. Artif. Intell. Res. 2003, 20, 61–124. [Google Scholar] [CrossRef]
  98. Wang, A.Y.; Sable, J.H.; Spackman, K.A. The SNOMED clinical terms development process: Refinement and analysis of content. In Proceedings of the AMIA Symposium, San Antonio, TX, USA, 9–13 November 2002; p. 845. [Google Scholar]
  99. Tenorth, M.; Beetz, M. KnowRob—Knowledge processing for autonomous personal robots. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 4261–4266. [Google Scholar]
  100. Barattini, P.; Vicentini, F.; Virk, G.S.; Haidegger, T. Human-Robot Interaction: Safety, Standardization, and Benchmarking; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  101. Tenorth, M.; Beetz, M. Representations for robot knowledge in the KnowRob framework. Artif. Intell. 2017, 247, 151–169. [Google Scholar] [CrossRef]
  102. Team, J.D. Drools Expert User Guide. Available online: https://docs.jboss.org/drools/release/5.2.0.CR1/drools-expert-docs/html_single/ (accessed on 21 December 2020).
  103. Sottara, D.; Mello, P.; Proctor, M. A configurable rete-oo engine for reasoning with different types of imperfect information. IEEE Trans. Knowl. Data Eng. 2010, 22, 1535–1548. [Google Scholar] [CrossRef]
  104. Tenorth, M.; Kunze, L.; Jain, D.; Beetz, M. Knowrob-map-knowledge-linked semantic object maps. In Proceedings of the 2010 IEEE 10th IEEE-RAS International Conference on Humanoid Robots, Nashville, TN, USA, 6–8 December 2010; pp. 430–435. [Google Scholar]
  105. Sadik, A.R.; Urban, B.; Adel, O. Using hand gestures to interact with an industrial robot in a cooperative flexible manufacturing scenario. In Proceedings of the 3rd International Conference on Mechatronics and Robotics Engineering, Paris, France, 8–12 February 2017; pp. 11–16. [Google Scholar]
  106. Gleeson, B.; MacLean, K.; Haddadi, A.; Croft, E.; Alcazar, J. Gestures for industry intuitive human-robot communication from human observation. In Proceedings of the 2013 IEEE 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 3–6 March 2013; pp. 349–356. [Google Scholar]
  107. Bruno, B.; Menicatti, R.; Recchiuto, C.T.; Lagrue, E.; Pandey, A.K.; Sgorbissa, A. Culturally-competent human-robot verbal interaction. In Proceedings of the 2018 IEEE 15th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 26–30 June 2018; pp. 388–395. [Google Scholar]
  108. Bruno, B.; Recchiuto, C.T.; Papadopoulos, I.; Saffiotti, A.; Koulouglioti, C.; Menicatti, R.; Mastrogiovanni, F.; Zaccaria, R.; Sgorbissa, A. Knowledge representation for culturally competent personal robots: Requirements, design principles, implementation, and assessment. Int. J. Soc. Robot. 2019, 11, 515–538. [Google Scholar] [CrossRef] [Green Version]
  109. Mansouri, M.; Pecora, F. A robot sets a table: A case for hybrid reasoning with different types of knowledge. J. Exp. Theor. Artif. Intell. 2016, 28, 801–821. [Google Scholar] [CrossRef]
  110. Köckemann, U.; Pecora, F.; Karlsson, L. Grandpa Hates Robots-Interaction Constraints for Planning in Inhabited Environments. In Proceedings of the 28th National Conference on Artifical Intelligence AAAI, Quebec, QC, Canada, 27–31 July 2014; pp. 2293–2299. [Google Scholar]
  111. Khaliq, A.A.; Köckemann, U.; Pecora, F.; Saffiotti, A.; Bruno, B.; Recchiuto, C.T.; Sgorbissa, A.; Bui, H.D.; Chong, N.Y. Culturally aware planning and execution of robot actions. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 326–332. [Google Scholar]
  112. Fuller, J. Transcultural Health and Social Care: Development of Culturally Competent Practitioners; Elsevier Health Science: Amsterdam, The Netherlands, 2007. [Google Scholar]
  113. Laboratories, S.N. JESS. Available online: http://alvarestech.com/temp/fuzzyjess/Jess60/Jess70b7/docs/index.html (accessed on 18 April 2021).
  114. Browne, P. JBoss Drools Business Rules; Packt Publishing Ltd.: Birmingham, UK, 2009. [Google Scholar]
Figure 1. Survey domain and scope.
Figure 2. Inclusion and exclusion criteria for KR systems reviewed in our survey (Section 3).
Figure 3. Survey structure and its corresponding sections.
Table 2. KR: evaluation criteria.

# | Research Question | Sections
1 | What is the application domain? | Section 3.1
2 | What is the basic idea and main contribution? | Section 3.2
3 | Which development tools have been used? | Section 3.3
4 | What is the architecture? | Section 3.4
5 | What is the ontology scope? | Section 3.5
6 | What is the reasoning scope? | Section 3.6
7 | What are the limitations? | Section 3.7
Table 3. KRSs: domain and application scope.

KRS | Domain | Application
KnowRob | Domestic | Household manipulation task in the kitchen
OROSU | Medical/hospital | Performs surgical procedures
CARESSES | Domestic | Culturally competent assistive robot for elderly people
PMK | Domestic | Indoor manipulation and motion planning to perform tasks such as serving a cup
SARbot | Domestic | Disaster search and rescue operations
IEQ | Domestic | An interactive humanoid social robot that provides suggestions
SmartRules | Domestic | Monitoring and assisting elderly people
ARBI | Medical/hospital | Performs the duty of robotic receptionist in the hospital
Worker-cobot | Industrial/manufacturing | Establishes collaboration between human workers and industrial robots
APRS | Industrial/manufacturing | Kit building
Table 4. KRSs: idea and contribution.

KRS | Solution/Contribution
KnowRob | Goes beyond local knowledge bases; builds knowledge-enabled cloud-based systems; relies on ontologies and semantic web technologies; integrates physics simulation-based reasoning and game engine-based rendering techniques.
OROSU | Integrates ontologies from the health care and robotics fields; develops a KRS for human body surgeries using robots; tracks robotic actions and maintains pose information in drilling tasks.
CARESSES | Endows the robot with communication skills through speech, gesture recognition, and culturally aware capabilities; enables the robot to change its behavior by adopting an individual's culture.
PMK | Formalizes ontological representation for semantic perception and manipulation; performs TAMP by including sensing information and semantic, geometric, and spatial reasoning with ontological concepts.
SARbot | Enables robotic search and rescue operations in an unknown environment by endowing the robot with high-level control and supporting decision making using the ontology.
IEQ | Develops the IEQ ontology for monitoring indoor environment quality; integrates IEQ with post-occupancy evaluation (POE); enables the robot to make appropriate suggestions based on the individual's preferences to control the indoor temperature.
SmartRules | Overcomes the limitations of standalone robots; develops a context-aware IoRT knowledge representation system; allows human-robot interaction in both the physical and cyber world; deploys rules based on an environment ontology.
ARBI | Uses symbolic representation for a better understanding of the environment; develops an integrated model based on the ontology; enables the robot to perform human-robot interactive services.
Worker-cobot | Enables the robot to work in collaboration with human workers while sharing the same manufacturing unit; achieves agile manufacturing through ontology-based MAS and BRMS.
APRS | Introduces an ontology-based model for the kitting process; empowers the robot with agility.
Table 5. KRSs: development tools.

KRS | Development Tools
KnowRob | OWL, SWI-Prolog
OROSU | OWL
CARESSES | OWL, Bayesian networks
PMK | OWL, SWI-Prolog
SARbot | OWL, SWRL, JESS
IEQ | Normative
SmartRules | SmartRules sub-language, μ-Concept
ARBI | OWL, Prolog, SPARQL
Worker-cobot | JADE, ACL
APRS | XSDL
Table 6. KRSs: architectural components.

KRS | Major Elements of Architecture
KnowRob | Three components: interface shell, logic-based language, hybrid reasoning shell
OROSU | Multiple ontologies: robotic and medical ontologies
CARESSES | Three modules: cultural knowledge base (CKB), culturally sensitive planning and execution, culture-aware human-robot interaction
PMK | Four major components: perception module, PM framework, TAMP planning, execution module
SARbot | Three-level control: low-, middle-, and high-level controls
IEQ | Seven components: knowledge base, dialog module, speech recognition module, Light_Sound and Therm_Hygrometric data acquisition modules, normative reasoner module, suggestion module
SmartRules | Two software layers: lower abstract layer, top reasoning layer
ARBI | Three components: knowledge manager, task planner, context reasoner
Worker-cobot | Three steps: Holonic Control Architecture (HCA), knowledge exchange, and reasoning
APRS | Three models: kitting workstation, action model, robot capability model
Table 7. KRSs: ontology scope.

KRS | Ontologies | Concepts/Classes
KnowRob | Inner world, virtual, logical, episodic memories, and DUL | Temporal, spatial, and mathematical things
OROSU | CORA, human anatomical, PARTS, and SUMO | Medical sensing and manipulation action
CARESSES | Modular structure | Not defined in [70]
PMK | Meta ontology | Feature, WSobject, WSpace, actor, sensor, context reasoning, action
SARbot | Entity, environment, and task | SLAM, object, task, and environment
IEQ | IEQ | Occupant, environment, and recommendation
SmartRules | Micro, DUL | Object, person, and robot
ARBI | ISRO, user, robot, action, perception, and environment | Person, SocialConcept, object, robot, and event
Worker-cobot | Agent, agent administrative | Not defined in [76]
APRS | Three | PointType, PartType
Table 8. Summary of the evaluation results for ontological components and cognitive capabilities.
KnowRob | OROSU | CARESSES | PMK | SARBot | IEQ | SmartRules | ARBI | Worker-cobot | APRS
Onto-logy
Compo-nents
ObjectDefinitionD*D-*D--*D*D--
ConceptCC*CC*C*CCC*C*C
EntityPPPPPNP, VPPP
TypeR,SHST, GKOMT, GSS
Enviro-nmentDefinition-E-E-E-E-E-E-E-E-E-E
TypeSS-SS, G--S--
ConceptPIPI-PIPIPI-PIPIPI-PI-PI
ActionDefinition*A--A---A----
ConceptE*E-E*E*E-E*E*E*E*E
Cogn-itive
Capabi-lities
Inter-action
Based on
Visual
Recogn-ition
1*--oo----o
2o----o-
3o---o--
4-o-----
5---o---
6---oo--
7----o--
8----o--
Interaction Based on
Voice Recognition
--o--o-o--
Task Execution and
Task Planning
oooooooooo
Object-> D: object definition in natural language is given; *D: object definition in natural language was taken from another ontology; C: object concept is given; *C: object concept is given, but its natural language definition (D and *D) is not explained. Entity (P: physical; N: non-tangible, i.e., air; V: virtual, i.e., virtual device) Type (S: specific; R: general; K: known and unknown; O: observable, i.e., temperature; M: manageable, i.e., active, static; H: human body parts, i.e., femur; G: group parts, i.e., chairs; T: individual parts, i.e., hinges). Environment-> -E: no natural language definition of the mapping environment is provided; PI: place concept is given; -PI: no place concept; -M: no map of the environment; S: semantic map; G: grid map. Action-> *A: action definition was taken from another ontology; -A,-E: no action definition and its ontological concept is given; E: action concept is described; *E: action concept is given, but its natural language definition is not given. *: recognition by acquiring knowledge from observations; -: not available; o: available.
Table 9. KRSs: reasoning scope.

KRS | Reasoning Scope
KnowRob | Hybrid reasoning, simulation-based reasoning, motion control reasoning
OROSU | Action reasoning using the HermiT and Pellet reasoners
CARESSES | Cultural knowledge-based reasoning
PMK | Perceptual reasoning, reasoning for object features, situation analysis, and planning
SARbot | Task reasoning
IEQ | Normative reasoning
SmartRules | Reactive reasoning
ARBI | Logical reasoning
Worker-cobot | Reasoning for interaction
APRS | Reasoning based on environmental knowledge
Table 10. KRSs: limitations.

KRS | Research Gaps/Limitations
KnowRob | Most of the works, however, focused on manipulation tasks only
OROSU | There is still a need for improvement in aligning medical and robotic ontologies due to the use of different upper ontologies
CARESSES | The proposed usage of CARESSES cultural knowledge at a large scale might be challenging in robots with a strong bias; its cultural KB is built by hand with the help of experts
PMK | Although it provides general knowledge, some concepts are not well defined, such as context-aware temporal and spatial relations, sensors' knowledge, and task representation
SARbot | In a disaster SAR scenario, there is still a need to study task planning for multiple heterogeneous robots in an uncertain environment
IEQ | To implement the IEQ, the environment should be robot-friendly, and the user should have correct pronunciation for speech interaction with the robot
SmartRules | Limitations include: the assumption that IoT objects are known in advance; it does not deal with novelty autonomously; it needs a better method to bridge the semantic gap between entity descriptions and their representation; it does not support reasoning in unfamiliar models using the current states of SmartRules and μ-Concept
ARBI | It requires extending the knowledge model to support a more socially relatable user experience
Worker-cobot | Its case study addresses only a few operation resources
APRS | The APRS project shows that despite significant efforts to improve agility in manufacturing kitting, more research is needed to deal with action failures
Table 11. Summary of the evaluation results of ontology-based knowledge representation systems in seven dimensions.

# | KRS | Applications | Idea and Contribution | Development Tools | Architecture | Ontology Scope | Reasoning Scope | Limitation(s)
1-10 | KnowRob, OROSU, CARESSES, PMK, SARbot, IEQ, SmartRules, ARBI, Worker-cobot, APRS | Table 3 (KRS: applications and domain scope) | Table 4 (KRS: idea and contribution) | Table 5 (KRS: development tools) | Table 6 (KRS: architectural components) | Table 7 (KRS: ontology scope) and Table 8 (ontological components) | Table 9 (KRS: reasoning scope) and Table 8 (reasoning-based cognitive capabilities) | Table 10 (KRS: limitations)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

