2.1. Characteristics of Human Knowledge
Human knowledge is the product of a long historical development and has been the focus of speculation, discussion, and investigation for many centuries [24
]. As stated by Berkeley, G., the objects of human knowledge are all ideas that are either (i) actually imprinted on the senses, or (ii) perceived by attending to one’s own emotions and mental activities, or (iii) formed out of ideas of the first two types, with the help of memory and imagination, by compounding or dividing or simply reproducing ideas of those other two kinds [26
]. Wiig, K. defined human knowledge as a person’s beliefs, perspectives, concepts, judgments, expectations, methodologies, and know-how (which is a kind of representation of who the individual is) [27
], whereas Allee, V. defined it as experience of a person gained over time [28
]. Blackler, F. discussed that knowledge can be located in the (i) minds of people (“embrained” knowledge), (ii) bodies (embodied knowledge), (iii) organizational routines (embedded knowledge), (iv) dialogues (encultured knowledge), and (v) symbols (encoded knowledge) [29
]. It was regarded as the ability or capacity to act by Sveiby, K.E. [30
], as actionable understanding by Sehai, E. [31
], and as the capacity for effective action by Drucker, P.F. [32].
As revealed by the above concise overview, knowledge is interpreted both as an acquired cognitive capacity and as an actionable potential in context. Looking ahead, the definitions rooted in the notion of actionable potential will play a specific role in the context of systems. The interpretations of knowledge as a cognitive capacity assume consciousness, which remains a debated phenomenon in scientific research, though its fundamental role in underpinning sensations, perceptions, thoughts, awareness, attention, understanding, ideation, etc. cannot be refuted [33
]. An adequate theory of consciousness is supposed to explain the (i) existence, (ii) causal efficacy, (iii) diversity, and (iv) phenomenal binding of knowledge [34
]. As with the manifestation of human intelligence in the intangible cognitive and perceptive domains, the issues of consciousness will not be discussed in any further depth in this paper.
An essential characteristic of knowledge is its potential to distinguish, describe, explain, predict, and control natural or artificial phenomena in the context of real-life problems. Introduced by Newell, A., the concept of the knowledge level concerns the nature of system knowledge above the symbol (or program) level, which influences the actions that the system will take to meet its goals given its knowledge [35
]. It is assumed that the intelligent behavior of the system is determined by its knowledge level potentials. Symbol-level mechanisms encode the system knowledge, whereas knowledge-level computational mechanisms purposefully transform and use knowledge in problem-solving. As Stokes, D.E. argued, “knowledge finds its purpose in action and action finds its reason in knowledge” [36
]. Consequently, it is the problem-solving potential of knowledge that is of importance, not its syntactic or semantic representations. Semantics determines a mapping between symbols, combinations of symbols, and propositions of the language on the one hand, and the concepts of the world to which they refer on the other.
Actually, the problem-solving potential can be regarded as the enthalpy (internal capacity) of knowledge. Evidently, its exploitation can be facilitated by proper structuring and representation schemes. However, this internal capacity is not a substance that can be stored; it is hidden or embedded in the representations [37
]. This is often referred to as the knowledge engineering paradox. There are definitions that extrapolate from personal knowledge and interpret it as an aggregated and stored asset of teams, collectives, and populations. However, only the explicit and formalized part of knowledge can be shared computationally as an asset. From an industrial management perspective, Zack, M.H. identified four characteristics of knowledge leading to difficulties: (i) complexity (cardinality and interactions of parts), (ii) uncertainty (lack of certain information), (iii) ambiguity (unclear meaning), and (iv) equivocality (multiple interpretations) [38].
Ackoff, R.L. proposed to make a distinction between data, information, and knowledge, which is helpful in providing a practical procedural definition for knowledge in real life [39
]. Based on this, human knowledge is interpreted from an information engineering perspective as the end product of successive aggregation and abstraction of signals, data, and information, even if they remain hidden or implicit in these processes. The construct exposing the hierarchical relationships of the mentioned cognitive objects is called the knowledge pyramid [40
]. Though this interpretation of knowledge creation rightly captures the involved main cognitive activities (integration and abstraction), it also carries uncertainties [41
]. One source of uncertainty is that the underlying cognitive objects (i.e., signals, data, information, knowledge, wisdom, and intellect) do not have unique and universally accepted definitions. Instead, they are either variously interpreted or used interchangeably in the literature. For instance, Gorman, M.E. proposed a taxonomy of knowledge, which considers information, skills, judgement, and wisdom as four types of knowledge [42
]. However, for a correct discussion of the subject matter, these need to be clearly distinguished and used rigorously. For our purpose, signals are seen as carriers of changes in the indicators of a phenomenon. Data are distinct descriptors of the attributes and states of a phenomenon. Information is the meaning of the data that allows us to answer questions about a phenomenon. Knowledge is the totality of facts and know-how for treating a phenomenon, which eventually allows solving related cognitive or physical problems. Wisdom is a kind of meta-knowledge of what is true or right coupled with proper judgment as to action. Intellect is a synergetic aggregate of all abovementioned concepts, which manifests in the faculty of objective understanding, reasoning, and acting, especially with regard to abstract matters.
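The working definitions above describe a chain of aggregation and abstraction from signals to actionable knowledge. As a minimal illustrative sketch (all names, values, and the temperature threshold are hypothetical, not taken from the cited literature), the progression might look like:

```python
# Hypothetical sketch of the knowledge-pyramid progression:
# signals -> data -> information -> knowledge (actionable know-how).

def signals_to_data(samples):
    """Data: distinct descriptors of the attributes/states of a phenomenon."""
    return {"mean_temp": sum(samples) / len(samples),
            "max_temp": max(samples)}

def data_to_information(data, threshold=80.0):
    """Information: the meaning of the data, answering a question."""
    return {"overheating": data["max_temp"] > threshold}

def information_to_knowledge(information):
    """Knowledge: facts plus know-how that allow solving the problem."""
    if information["overheating"]:
        return "reduce load and increase cooling"
    return "continue normal operation"

raw_signals = [71.2, 74.8, 83.5, 79.1]   # signals: changes in an indicator
data = signals_to_data(raw_signals)
info = data_to_information(data)
action = information_to_knowledge(info)
print(action)  # -> reduce load and increase cooling
```

Note that the abstraction is lossy in each step (the raw signals cannot be recovered from the action), which echoes the point that the lower-level cognitive objects remain hidden or implicit in the process.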
2.2. Categories of Human Knowledge
In the broadest sense, knowledge (i) is a multi-faceted ingredient of human intellect, (ii) has many manifestations and distinguishing characteristics, and (iii) can be seen from many perspectives. The latter creates the basis for various classifications. Perhaps the most fundamental classification of knowledge theories is according to the owner of knowledge, which introduces the categories of (i) individual (differentiated) knowledge and (ii) organizational (integrated) knowledge. In fact, human knowledge can be differentiated based on its owner (producer or utilizer), which can be (i) individual, (ii) group/team, (iii) community/organization, (iv) population/inhabitants, and (v) mankind/civilization. In the view of Barley, W.C. et al., knowledge can be characterized by its position on three interrelated axes of attributes: (i) whether knowledge is explicit, (ii) where knowledge resides, and (iii) how knowledge is enacted [43
]. Byosiere, P. and Ingham, M. identified 40 different types of knowledge and categorized them into a broad typology of specific knowledge content areas (or domains of knowledge) [44].
A typology was proposed by Alavi, M. and Leidner, D. [45
], which considered the explicit (formal and tangible) and implicit (tacit and intangible) nomenclature of Polányi, M. [46
]. Tacit knowledge is related to the subconscious operation of the human brain, which does something automatically, almost without thinking. This type of knowledge (know-how) is contained in the personal communications of domain experts, as the main knowledge source, but it is difficult to extract, elicit, and share. Typically, three categories of tacit knowledge are distinguished: (i) instinctive (subconscious) knowledge of individuals, (ii) objectified (codified) knowledge of social systems, and (iii) collective tacit knowledge of social systems.
De Jong, T. and Ferguson-Hessler, M.G. suggested a differentiation between knowledge-in-archive and knowledge-in-use, and emphasized the task dependence of knowledge classification and characterization (the context in which knowledge has a function) [47
]. Based on task analysis, they distinguished four types of knowledge: (i) situational, (ii) conceptual, (iii) procedural, and (iv) strategic knowledge. Other classifications sorted knowledge as (i) declarative (know-about), (ii) procedural (know-how), (iii) causal (know-why), (iv) conditional (know-when), and (v) relational (know-with) [38
], or based on their origin as (i) intuitive, (ii) authoritative, (iii) logical, and (iv) empirical [48].
2.3. Characteristics of System Knowledge
All systems have to do with knowledge, but not all systems are knowledge-based with regard to their operation. The distinguishing characteristic of knowledge-based systems is a set of growing cognitive capabilities that makes their operation sophisticated, smart, or even intelligent. System knowledge is a purposeful higher-level organization and operationalization of information that creates operational potential for systems. It seems to be a defendable conjecture that there are as many forms of system knowledge as there are systems. As explained in the Introduction, the industrial revolutions brought about a confusingly broad spectrum of engineered systems, which reflect the trend of increasing intellectualization. In the context of human knowledge, Fahey, L. and Prusak, L. argued that knowledge does not exist independently of a knower [49
]. If this argument is correct and valid in the context of intellectualized engineered systems, then we should simultaneously consider the characteristics of the systems and those of the knowledge handled by them. As Fuchs, C. stated, whenever a complex system organizes itself, it produces information, which in turn enables internal structuring, functional interaction, and synergetic behavior [50].
In the second half of the last century, various fundamental systems theories were developed, such as (i) complex systems theories, (ii) fuzzy system theories, (iii) chaos system theories, (iv) intelligent system theories, and (v) abstract system theories, to explain the essence, order, and cognitive capabilities of systems [51
]. These theories point to the fact that there is a relationship between the paradigmatic features of engineered systems and the characteristics of knowledge they are able to handle and process [52
]. Figure 1
shows the most important dimensions of opposing paradigmatic features. As an arbitrary example, system knowledge can be static, but it can also be dynamic (even emerging or evolving) in other cases. The paradigmatic features may show a gradation between the extremes on a line. For instance, between fully physical systems and fully cyber systems reside software systems, which show both physical (stored software) and cyber (digital code) features. Eventually, each body of system knowledge represents a particular composition of paradigmatic features, and vice versa. For instance, establishing a concrete, open, finite, dynamic, non-linear, task-specific, continuous, intellectualized, self-adaptive, plural, cyber-physical system of systems requires a dedicated body of knowledge. Obviously, this introduces a complexity challenge when discussing the characteristics and the types of system knowledge.
What is the corpus of system knowledge? System knowledge is not the knowledge that is needed to specify, design, implement, use, and recycle systems. Instead, it is the knowledge that is needed by systems to achieve their operational purposes or objectives, or in other words, the knowledge they need to function. That is, system knowledge is identical neither with engineering knowledge (the knowledge of making) [53
], nor with technological knowledge (the knowledge of enabling) [54
], though some elements of both are present in system knowledge. System knowledge is cognitive technology knowledge, including (i) chunks and bodies of generic scientific knowledge (facts, definitions, and theories), (ii) specialized professional knowledge (principles, heuristics, and experiences), and (iii) everyday common knowledge (rules of thumb). This purpose-driven knowledge may be implanted into a system by its developers or acquired by the system itself, using its own resources, during its lifetime, while the system is operating.
In a general sense, every system is designed and implemented to serve some predefined purposes and to fulfil tasks, that is, to solve concrete problems. Therefore, system knowledge is viewed in our study as problem-solving knowledge (no matter what the notions “solving” and “problem” actually mean in various contexts). Jonassen, D. suggested that success in problem-solving depends on a combination of (i) strong domain knowledge, (ii) knowledge of problem-solving strategies, and (iii) attitudinal components [55
]. Anderson, J.D. approached this issue from a cognitive perspective, in which all components of knowledge play complementary roles [56
]. He grouped the components needed to solve problems broadly into (i) factual (declarative), (ii) reasoning (procedural), and (iii) regulatory (metacognitive) knowledge, complemented with other elements of competence such as skills, experiences, and collaboration. The knowledge needed to solve complicated problems is complex in itself. It is a blend of problem-solving affordances rather than a roadmap or a recipe for arriving at a solution.
The lack of natural understanding of the meaning of the possessed knowledge by the systems themselves and the need to purposefully infer from computable representations put system knowledge into the position of a context-dependently actionable potential. In real life, problem-solving is often a trial-and-error process rather than well-defined puzzle-solving. In addition, both the complex problem to be solved and the problem-solving process may feature emergence, which can be combined with changes in the context. This typically makes (i) the knowledge imperfect and incomplete for the given purpose, (ii) the knowledge use process non-linear, and (iii) the needed reasoning process non-monotonic. From a cognitive perspective, the components of knowledge needed to solve such problems can be broadly grouped into (i) factual (declarative), (ii) conceptual (hypothetical), (iii) reasoning (procedural), and (iv) regulatory (metacognitive) knowledge/skills, and all play complementary roles. Friege, G. and Lind, G. reported that conceptual knowledge and problem scheme knowledge are excellent predictors of problem-solving performance [57
]. As presented and evaluated for strengths and limitations by Goicoechea, A., there are six main methods known for reasoning with imperfect knowledge: (i) Bayes theory-based, (ii) Dempster-Shafer theory-based, (iii) fuzzy set theory-based, (iv) measure of (dis)belief theory-based, (v) inductive probabilities-based, and (vi) non-monotonic reasoning-based methods [58].
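The first of these method families can be made concrete with a minimal sketch: Bayes' theorem revises the belief in a hypothesis in light of uncertain evidence. The diagnosis scenario and all probability values below are purely hypothetical, chosen only to illustrate the update step:

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical diagnosis: H = "component is faulty", E = "sensor alarm".
p_h = 0.05                 # prior belief that the component is faulty
p_e_given_h = 0.90         # alarm probability if the component is faulty
p_e_given_not_h = 0.10     # false-alarm probability otherwise
# Total probability of observing the alarm at all:
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
posterior = bayes_update(p_h, p_e_given_h, p_e)
print(round(posterior, 3))  # -> 0.321: belief revised upward, yet still uncertain
```

Even strong evidence leaves the conclusion probabilistic rather than certain, which is precisely why such methods qualify as reasoning with imperfect knowledge.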
2.4. Categories of System Knowledge
Manifestation of knowledge is the primary basis for any classification. By and large, it is about the roles of knowledge in knowing and reasoning. Systems knowledge may manifest in largely different forms [59
]. Based on the literature, the following are differentiated as the main categories of system knowledge manifestation: (i) declarative knowledge (which captures descriptors and attributes of facts, concepts, and objects), (ii) structural knowledge (which establishes semantic relations between facts, concepts, and objects), (iii) procedural knowledge (which captures the know-how of doing or making something), (iv) abstracted knowledge (which generalizes both “what is” and “how to” types of knowledge within or over contexts), (v) heuristic knowledge (which represents intuitive, emergent, uncertain, and/or incomplete knowledge in a context), and (vi) meta knowledge (which apprehends wisdom and decisional knowledge about other types of knowledge). These categories may include specific types of knowledge, which can be implanted in systems by knowledge engineers and/or generated or acquired by the systems. Particular knowledge representation schemes are applied to each of the abovementioned forms of manifestation [60
]. When represented for computer processing, system knowledge is decomposed into primitives, and thereby its synergy (or compositionality) is often lost.
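Structural knowledge, for instance, is commonly decomposed into subject-predicate-object primitives. The hedged sketch below (all entity and relation names are hypothetical) shows both sides of the point above: the primitives are easy to store and query, but composite meaning, such as a transitive part-of chain, has to be explicitly recomposed from them:

```python
# Hypothetical sketch: structural knowledge as subject-predicate-object triples.
triples = [
    ("pump-3",       "is_a",    "centrifugal_pump"),
    ("pump-3",       "part_of", "cooling_loop"),
    ("cooling_loop", "part_of", "reactor_unit"),
]

def related(subject, predicate, facts):
    """Query the primitive relations of one entity."""
    return [o for (s, p, o) in facts if s == subject and p == predicate]

def part_of_closure(subject, facts):
    """Recompose a composite fact (transitive 'part_of') from primitives."""
    result, frontier = [], related(subject, "part_of", facts)
    while frontier:
        node = frontier.pop()
        result.append(node)
        frontier.extend(related(node, "part_of", facts))
    return result

print(part_of_closure("pump-3", triples))  # -> ['cooling_loop', 'reactor_unit']
```

The fact that "pump-3 is ultimately part of reactor_unit" exists nowhere in the stored primitives; it only emerges when a reasoning procedure recombines them, illustrating how decomposition puts compositionality at risk.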
Systems knowledge reflects intentional, functional, and normative aspects, and creates a dialogue between probabilities and possibilities. As discussed above, it involves “knowing what”, “knowing how”, and often also “knowing why” elements. An assumption is that the knowledge needed by systems for problem-solving can be categorized by analogy with the knowledge employed in human problem-solving. Solaz-Portolés, J.J. and Sanjosé, V.C. identified the types of system knowledge used in concrete problem-solving and discussed how they affect the performance of problem-solvers. From the perspective of computational handling of problems, they identified three types of knowledge: (i) problem-relevant (context) knowledge, (ii) problem-scheme (framework) knowledge, and (iii) problem-solving (content) knowledge. The types of knowledge typically involved in scientific problem-solving are (i) declarative, (ii) procedural, (iii) schematic, (iv) strategic, (v) situational, (vi) metacognitive, and (vii) problem-translating knowledge [61].
System developers and knowledge engineers agree that problem-solving knowledge possessed by systems is almost never perfect. It means that systems should, more often than not, reason with imperfect knowledge. As mentioned by Reed, S.K. and Pease, A., the goal for all descriptive reasoning models is to account for imperfections [62
]. Imperfection of knowledge may be caused by incompleteness, ambiguity, conditionality, contradiction, fragmentation, inertness (relevance), misclassification, or uncertainty. Based on the general classification of information and systems proposed by Valdma, M., a categorization of non-deterministic system knowledge is shown in Figure 2 [63
]. Though AI technologies offer possibilities for handling non-deterministic knowledge, many researchers agree that we are still far from the state where AI research will be able to build truly human-like intuitive systems. On the other hand, many engineered systems benefit from the use of AI technologies and methods, which lend themselves to a variety of enabling reasoning mechanisms [64].
2.5. The Main Forms and Mechanisms of Knowledge Inferring/Reasoning
Research in knowledge inferring/reasoning faces one fundamental question. It was discussed above that all forms of knowledge are based typically on data constructs, data streams, data patterns, and/or data models. Information is patterned data and knowledge is the derived capability to act. Thus, the question is: How can a system synthesize functional and operational information from data and how can it infer knowledge and reason with it on its own? [65
]. Humans are equipped with the formidable power of abstraction and imagination, but what does this mean in the context of intellectualized systems? [66
]. How can they replicate human reasoning as artificial reasoning? Scientists of cognition constructed descriptive models of how people reason, whereas information scientists proposed prescriptive models to support human reasoning [67
]. Many elements of a sufficing explanation have been mentioned in the literature, but providing an exhaustive answer still needs further efforts. As the main forms of knowledge inferring/reasoning, both non-ampliative and ampliative mechanisms have been proposed over the last decades. Non-ampliative mechanisms include (i) classification (placing into groups/classes), (ii) searching/looking up (selecting from a bulk), and (iii) contextualization (appending application context). There is a clearly observable progression concerning the implementation of truly ampliative mechanisms, which include (i) fusion, (ii) inferring, (iii) reasoning, (iv) abstraction, (v) learning, and (vi) adaptation of knowledge. Inferring is the operation of deriving information in context [68
], reasoning is about synthesis of knowledge [70
]. Inferences are not always made explicit in the process, but they serve as invisible connectors between the claims in the argument.
The term “reasoning” is used in both a narrower and a broader meaning. In its narrower meaning, the term refers to all ampliative computational mechanisms, which formally manipulate a finite set of symbols representing a collection of believed propositions to produce representations of new ones. As stated by Krawczyk, C., reasoning has three important core features in this narrower context: (i) it moves from multiple inputs to a single output, which can be a conclusion or an action, (ii) it involves multiple steps and different ways through a state space to achieve a final outcome, and (iii) it operationalizes a mixture of objectives, previous knowledge, novel information, and dynamic contexts, which, however, depend on the type and state of the problem [72
]. As part of computational reasoning, automated reasoning operates on knowledge primitives and tries to combine them based on explicit or implicit algorithms. Reasoning mechanisms can basically be (i) quantitative, (ii) qualitative, or (iii) hybrid. Quantitative reasoning is arithmetic evaluation. Qualitative reasoning is a form of calculation, not unlike arithmetic, but over symbols standing for propositions rather than numbers. In its ambiguous and non-trivial broader meaning, reasoning refers to a complex and intricate cognitive phenomenon, which is more than just the formal application of logic; it extends to the semantics, pragmatics, and even apobetics of human intelligence. Because of its broad notional coverage, reasoning has become a complex and multidisciplinary area of study [73].
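A narrow-sense, symbol-manipulating reasoner of this kind can be sketched as a forward-chaining loop over production rules: it takes multiple input symbols, fires rules in multiple steps through a state space, and arrives at a conclusion. The facts and rules below are hypothetical, chosen only to illustrate the mechanism:

```python
# Hypothetical sketch: forward chaining over production rules.
# Each rule maps a set of antecedent symbols to one consequent symbol.
rules = [
    ({"temperature_high", "pressure_high"}, "overload"),
    ({"overload"},                          "shutdown_required"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose antecedents hold until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)   # one reasoning step through the state space
                changed = True
    return facts

derived = forward_chain({"temperature_high", "pressure_high"}, rules)
print("shutdown_required" in derived)  # -> True: multi-input, multi-step reasoning
```

The loop is ampliative in the narrow, symbolic sense: the conclusion "shutdown_required" is a new proposition not present among the inputs, produced purely by formal manipulation of the believed symbols.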
Studied not only from a computational point of view, but also from a philosophical point of view, reasoning has four generic patterns: (i) inductive reasoning, (ii) deductive reasoning, (iii) abductive reasoning, and (iv) retrospective reasoning [74
]. Inductive reasoning is a method of reasoning in which the stated premises are viewed as sources of some evidence, but not full assurance, for the truth of the conclusion [75
]. Inductive reasoning progresses from individual cases towards generalizable statements. Inductive inferences are all based on reasonable probability, not on absolute logical certainty. Having the syllogism as its cognitive pattern, deductive reasoning starts out with a general statement or a hypothesis believed to be true and examines the possibilities to reach a correct logical conclusion. Johnson-Laird, P.N. proposed three theories of deductive performance: (i) deduction as a process based on factual knowledge, (ii) deduction as a formal, syntactic process, and (iii) deduction as a semantic process based on mental models [76
]. He also distinguished (i) lexical, (ii) propositional, and (iii) quantified deduction [77].
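The semantic, model-based view of quantified deduction can be illustrated with a small sketch: a syllogism is valid when no model makes the premises true and the conclusion false. The following hedged example checks the classic pattern "all A are B, all B are C, therefore all A are C" by enumerating candidate models; restricting the enumeration to a two-individual domain is an assumption made for brevity, so this is an illustration of the checking idea, not a general proof procedure:

```python
from itertools import product

# Hypothetical sketch: semantic validation of a syllogism via model enumeration.
# Premises: "All A are B", "All B are C"; conclusion: "All A are C".
domain = [0, 1]  # two individuals, enough to illustrate the check

def all_are(xs, ys):
    """'All X are Y' holds in a model iff X is a subset of Y."""
    return xs <= ys

valid = True
# Each individual independently belongs (or not) to A, B, and C.
for membership in product(product([False, True], repeat=3), repeat=len(domain)):
    A = {i for i in domain if membership[i][0]}
    B = {i for i in domain if membership[i][1]}
    C = {i for i in domain if membership[i][2]}
    if all_are(A, B) and all_are(B, C) and not all_are(A, C):
        valid = False  # a countermodel: premises true, conclusion false
print(valid)  # -> True: no countermodel found, the deduction holds
```

This mirrors the mental-model account: instead of applying syntactic rules, the reasoner searches for a counterexample among the envisaged models and accepts the conclusion only when none exists.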
Abductive reasoning allows finding the preconditions of a phenomenon from its consequences and inferring a best matching explanation for the phenomenon [78
]. Abduction makes a probable conclusion from what is known. Hirata, K. proposed a classification of abductive reasoning approaches: (i) theory/rule-selection abduction, (ii) theory/rule-finding abduction, and (iii) theory/rule-generation abduction, based on the state of availability of the elements of the explanatory knowledge [79
]. A central tenet of retrospective reasoning is causation, that is, argumentation with past causes as evidence for current effects [80
]. The ability of retrospective reasoning is achieved by maintaining the evolutionary information of a knowledge system. As discussed by Rollier, B. and Turner, J.A., retrospective thinking is utilized by designers and other creative individuals when they mentally envision the object they wish to create and then think about how it might be constructed [81
]. Heuristic reasoning has also received a lot of attention in artificial intelligence research and has resulted in various theories of reasoning about uncertainty [83
]. Treur, J. interpreted heuristic reasoning as strategic reasoning [84
]. Schittkowski, K. proposed to include heuristic knowledge of experts through heuristic reasoning in mathematical programming of software tools [85].
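The theory/rule-selection flavor of abduction discussed above can be sketched as scoring candidate explanations by how well their predicted consequences match the observations and selecting the best one. The fault hypotheses, symptom sets, and the scoring weight below are hypothetical, intended only to illustrate the selection step:

```python
# Hypothetical sketch: theory/rule-selection abduction.
# Each candidate hypothesis predicts a set of observable consequences.
hypotheses = {
    "clogged_filter": {"low_flow", "high_pressure"},
    "pump_failure":   {"low_flow", "no_vibration", "zero_current"},
    "sensor_drift":   {"low_flow"},
}

def abduce(observations, hypotheses):
    """Select the hypothesis whose predictions best match the observations."""
    def score(predicted):
        overlap = len(predicted & observations)
        excess = len(predicted - observations)  # penalize unconfirmed predictions
        return overlap - 0.5 * excess
    return max(hypotheses, key=lambda h: score(hypotheses[h]))

observed = {"low_flow", "high_pressure"}
print(abduce(observed, hypotheses))  # -> clogged_filter
```

The conclusion is only the most probable explanation given what is known; observing an additional symptom could shift the selection, which is exactly the defeasible character of abduction.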
Historically, five major families of ampliative computational mechanisms have been evolving. First, symbolist approaches, such as (i) imperative programming language-based (procedure-based) reasoning, (ii) declarative logical language-based reasoning, (iii) propositional logic-based inferring, (iv) production rule-based inferring, and (v) decision table/tree-based inferring. The literature discusses several modes of logical reasoning, such as the (i) deductive, (ii) inductive, (iii) abductive, and (iv) retrospective modes. Second, analogist approaches, such as (i) process-based reasoning, (ii) qualitative physics-based reasoning, (iii) case-based reasoning, (iv) analogical (natural analogy-based) reasoning, (v) temporal (time-based) reasoning, (vi) pattern-based reasoning, and (vii) similarity-based reasoning. Third, probabilistic approaches, such as (i) Bayesian reasoning, (ii) fuzzy reasoning, (iii) non-monotonic logic, and (iv) heuristic reasoning. Fourth, evolutionist approaches, such as (i) genetic algorithms, (ii) bio-mimicry techniques, and (iii) self-adaptation-based techniques. Fifth, connectionist approaches, such as (i) semantic network-based reasoning, (ii) shallow-learning neural networks, (iii) smart multi-agent networks, (iv) deep-learning neural networks, (v) convolutional neural networks, and (vi) extreme neural networks.
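Within the analogist family, case-based (similarity-based) reasoning can be sketched as retrieving the stored case most similar to a new problem and reusing its solution. The case base, feature names, and solutions below are hypothetical, serving only to illustrate the retrieve-and-reuse step:

```python
# Hypothetical sketch: case-based (similarity-based) reasoning.
# Each stored case pairs a feature vector with a known solution.
case_base = [
    ({"load": 0.9, "temp": 0.8}, "add_cooling"),
    ({"load": 0.2, "temp": 0.3}, "normal_operation"),
    ({"load": 0.8, "temp": 0.2}, "redistribute_load"),
]

def distance(a, b):
    """Euclidean distance over the shared numeric features."""
    return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

def solve_by_analogy(problem, cases):
    """Retrieve the nearest stored case and reuse its solution."""
    nearest = min(cases, key=lambda case: distance(problem, case[0]))
    return nearest[1]

print(solve_by_analogy({"load": 0.85, "temp": 0.75}, case_base))  # -> add_cooling
```

The mechanism is ampliative in a weak sense: the new problem never appeared before, yet a solution is proposed by analogy; a fuller case-based reasoning cycle would also revise the reused solution and retain the adapted case.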
2.6. The Main Functions of Knowledge Engineering and Management
A rapidly extending domain of artificial intelligence research is knowledge engineering (KE) [86
]. Its major purposes are (i) aggregating and structuring raw knowledge, (ii) filtering and constructing intellect for problem-solving, and (iii) modelling the actions, behavior, expertise, and decision making of humans in a domain. It considers the objective and structure of a task to identify how a solution or decision is reached. This differentiates it from knowledge management (KM) [87
], which is the conscious process of defining, structuring, retaining, and sharing the knowledge and experience of employees within an organization in order to improve efficiency and to save resources. Knowledge management has emerged as an important field for practice and research in information systems [88
]. Knowledge engineering may target productive organizations (organizational knowledge engineering—OKE) [89
] as well as engineering systems (system knowledge engineering—SKE) [90
]. Technically, KE (i) develops problem-solving methods and problem-specific knowledge, (ii) applies various structuring, coding, and documentation schemes, languages, and other formalisms to help the symbolic, logical, procedural, or cognitive processing of knowledge by computer systems, and (iii) leverages these to solve real-life knowledge-intensive problems. Dedicated SKE technologies are used, which, according to their objective, can be differentiated as system-general and system-specific technologies. Traditional system-general KE is based on the assumption that the descriptive and prescriptive knowledge that a system needs already exists as human knowledge, but it has to be collected, structured, implemented, and tested. Nowadays, SKE is challenged not only by the amount of knowledge, but also by the complicatedness of the tasks to be solved.
Specifically, (i) system-external, (ii) system-internal, and (iii) combined SKE approaches can be distinguished. Figure 3
shows a graphical comparison of these three approaches. In the case of a system-external approach, the tasks of knowledge creation and treatment are completed by knowledge engineers, whereas, in the case of a system-internal approach, they are done by the concerned system. The arrows in the sub-figures of Figure 3
indicate the flows of knowledge between the highest-level system-internal, system-external, and combined activities of system knowledge engineering. The oppositely pointed arrows in the systelligence blocks symbolize the circular processing of system knowledge by the ampliative reasoning and learning mechanisms of a system in the context of application.
According to the interpretation of Studer, R. et al., knowledge engineering can procedurally include (i) transfer processes, (ii) modelling processes, and (iii) contextualization processes [86
]. Table 1
provides a restricted overview of the three main classes of knowledge engineering activities: (i) knowledge creation, (ii) knowledge treatment, and (iii) knowledge utilization, which lead to system knowledge. Common system-external methods/techniques of knowledge availing are (i) knowledge base construction, (ii) warehouse/repository construction, (iii) ontology construction, and (iv) knowledge graph editing. System-internal methods/techniques of creating knowledge are (i) statistical data mining, (ii) knowledge discovery and interpretation, (iii) semantic relation analysis, (iv) pattern recognition and interpretation, (v) logical inferring, (vi) semantic reasoning, (vii) machine learning, (viii) deep learning, (ix) extreme learning, (x) task-specific reasoning, and (xi) causal modelling/inference. Numerous combinations of these external and internal knowledge availing and deriving methods/techniques have been applied in engineered systems so far.
The complex domain of SKE research decomposes into various subdomains such as (i) knowledge transfer (transfer of the problem-solving expertise of human experts into programs), (ii) knowledge representation (using formal models, symbols, structures, heuristics, languages, and algorithms), (iii) decision-making (strategies, tactics, and actions of logical and heuristic thinking), and (iv) creation of knowledge-based systems (explicit and implicit mechanisms and algorithms). The spectrum of knowledge-based systems has become wide, ranging from expert systems involving a large and expandable knowledge base integrated with a rules engine to unsupervised deep learning mechanisms with replaceable training data sets. Besides diversification, contemporary SKE also faces challenges rooted in (i) the interplay of goal-setting and goal-keeping in systems, (ii) the dynamic changes in context information, (iii) the possible self-adaptation and self-evolution of systems, (iv) the variety of tasks to be addressed in complex real-life applications, and (v) the explicit handling of common-sense knowledge that is always present (activated) in the case of human problem-solving. As in the case of organizations, the integration of knowledge into a coherent “wholesome” body of knowledge for engineered systems is a complicated task [91].
Human-system dialogue has been proposed by Amiri, S. et al. as a specific form of extending system knowledge [92
]. Knowledge representation captures knowledge in many alternative forms, for instance, as (i) propositional rules, (ii) semantic constructs, (iii) procedural scenarios, (iv) image sequences, (v) live video streams, (vi) alphanumerical models, (vii) mathematical expressions/formulas, (viii) relational schemes, (ix) synthesized (understandable) patterns, (x) explicit algorithms, and (xi) implicit algorithms. Independent of which representation is applied, interpretation is needed to make explicit what concrete knowledge the representation carries. This means that a pattern can be considered knowledge only if it conveys extractable meaning that is useful for solving a decisional problem. If this does not apply, then the represented system knowledge degrades to pseudo-knowledge.