Article

Not Deep Learning but Autonomous Learning of Open Innovation for Sustainable Artificial Intelligence

1 Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333, Techno Jungang Daero, Hyeonpung-Myeon, Dalseong-Gun, Daegu 711-873, Korea
2 Department of Business Administration, Sangji University, 660 Woosan-Dong, Wonju-Si 220-702, Kangwon, Korea
3 School of Civil Engineering and Built Environment, Queensland University of Technology (QUT), 2 George Street, Brisbane, QLD 4001, Australia
* Author to whom correspondence should be addressed.
Sustainability 2016, 8(8), 797; https://0-doi-org.brum.beds.ac.uk/10.3390/su8080797
Submission received: 25 May 2016 / Revised: 25 July 2016 / Accepted: 2 August 2016 / Published: 13 August 2016

Abstract: What do we need for sustainable artificial intelligence that is beneficial rather than harmful to human life? This paper builds an interaction model between direct and autonomous learning, drawn from humans' cognitive learning processes and firms' open innovation processes, and conceptually establishes this direct and autonomous learning interaction model. The key feature of the model is an incessant process of responding to inputs from external environments, through interactions between autonomous learning and direct learning, while rearranging internal knowledge. When autonomous learning happens, the units of knowledge determined through direct learning are separated. They induce not only broad autonomous learning, made through horizontal combinations that surpass the combinations occurring in direct learning, but also in-depth autonomous learning, made through vertical combinations that add new knowledge. The core of the interaction model between direct and autonomous learning is the variability of the boundary between proven knowledge and hypothetical knowledge, the limits of knowledge accumulation, and the complementarity and conflict between direct and autonomous learning. These should therefore be considered when introducing the interaction model between direct and autonomous learning into navigation systems, cleaning robots, search engines, and so on. In addition, the relationship between direct learning and autonomous learning should be considered when building open innovation strategies and policies.

1. Introduction

Discovery involves collaboration among man's intellectual activities [1]. That is, new knowledge can be found through collaboration with the environment or with colleagues [2]. In addition, learning plays a vital role in the development of autonomous agents [3]. The ability to learn new knowledge is one of the major characteristics of humans as autonomous agents [4]. Humans are able to accumulate knowledge exponentially and conduct various social activities through very large amounts of autonomous learning in addition to direct learning. Meanwhile, some researchers have presented novel unsupervised learning methods for human action categories [5,6,7]. However, because human learning comprises various nonhomogeneous kinds of learning, this paper uses two concepts to distinguish them: direct learning, which means learning by direct teaching, and autonomous learning, which means learning by humans' recombination of the results of direct learning [8,9,10,11,12,13]. Social learning, which occurs in the presence of many individuals, has allowed humans to build up extensive cultural repertoires, enabling them to adapt to a wide variety of environmental and social conditions [14]. The Distributed Knowledge Management (DKM) approach and swarm intelligence are also kinds of social or organizational approaches [15,16]. The present study includes learning within the boundaries of organizations such as firms in an open regional innovation system [17].
Computers cannot accumulate additional knowledge that surpasses their given programming. For computers to accumulate additional knowledge, additional program coding is necessary. The main characteristic of human knowledge accumulation is autonomous learning, which is sometimes called implicit learning, learning by solving a problem, or introspective measures of learning [18]. That is, through autonomous learning, each individual accumulates additional knowledge beyond the scope of direct teaching. Therefore, if computers could conduct additional autonomous learning that surpasses direct programming, similar to humans, they would also be able to accumulate knowledge autonomously, similar to humans [19].
What do we need for sustainable artificial intelligence that is not harmful but beneficial for human life?
Is it possible to design an autonomous learning model similar to that of humans or firms? If so, how should it be designed?
The purpose of the present study is to provide an answer to these research questions.
Herbert Simon, who created momentum for the epoch-making development of computer engineering by applying the human bounded rationality model to artificial intelligence, thereby formed a foundation for that development through the modeling of human rationality [20,21,22,23,24]. The present study is intended to construct a knowledge learning model by modeling how humans or firms learn knowledge under open innovation conditions, such as "learning from open innovation" or "learning at the boundaries in an open regional innovation system" [17,25,26]. The value of this study is high because autonomous learning is required as a core function of products in many business areas, such as intelligent robots, autonomous vehicles, next-generation individual smartphones, or open innovation strategy and business model building [27].
In this study, the concept model is developed by first reviewing previous studies and brainstorming about humans' cognitive learning processes and firms' learning processes during open innovation [28]. Second, we construct the causal model of the interaction model between direct and autonomous learning (IMBDAL) and develop a mathematical model for it. Third, we identify cases that would be targets for applying IMBDAL, thereby securing the validity of the model and establishing its value and limitations.
This study treats not learning in general but open innovation learning; that is, learning for open innovation is the target of this paper.

2. Model Building

2.1. Basic Concept Modeling of Human and Firm Learning

Human learning covers a broad range of learning theories and key perspectives on learning related to education, including behaviorist, cognitive, social cognitive, contextual, and developmental theories, always highlighting relationships between concepts [29]. The theory of direct perception holds that perception is specific to properties of ambient energy arrays [14]. However, the propensity of learners for autonomous learning is a function of the development of cognitive and metacognitive abilities for: (a) processing, planning, and regulating learning activities; and (b) controlling and regulating affect and motivation [30].
In human learning, "proven knowledge" is established through direct learning, that is, through knowledge directly inputted from the external environment via various ways and channels; "confirmed information" is then established from proven knowledge through combinations of pieces of such knowledge, carried out in line with human consciousness, as shown in Figure 1.
Thereafter, “hypothetical knowledge” is obtained through the various recombinations of these pieces of information in the process of autonomous learning. This hypothetical knowledge is then converted into “non-confirmed information”, which has certain branches through the interactions between environments and humans [31,32].
Non-confirmed information acts as a core element of proven knowledge creation through direct learning. The learning in human cognitive processes is composed of loops of direct learning and autonomous learning. In this learning process, proven knowledge and hypothetical knowledge are converted into confirmed information and non-confirmed information, respectively, and the boundaries between the two are relatively set so that they can circulate [1,8].
Human beings complete formal direct learning by going through kindergarten, elementary school, secondary education including high school, and higher education including undergraduate and graduate school. However, they continue direct learning for life through various channels, such as work life, travel, hobbies, and social relationships. In line with this, a considerable or even major part of the knowledge accumulated by humans, utilized as the basis of various decision-making processes and behaviors in life, is derived from confirmed information, the result of direct learning. Humans also undergo autonomous learning to create hypothetical knowledge and additionally obtain non-confirmed information. However, the knowledge or information accumulated by computers is limited to confirmed knowledge that has been entered in the form of programs completed internally through direct learning, that is, through direct coding processes. At present, computers lack autonomous learning processes similar to those of humans.
Humans' main direct learning is conducted mostly at similar levels and amounts. However, much of the knowledge that determines the amount and quality of individual humans' decision-making and behavior differs fundamentally across individuals, because it is made through autonomous learning that surpasses direct learning.

2.2. Opening the Black Box of Autonomous Learning

With regard to humans' learning, there have been previous studies, such as those on adaptive network models of human learning, that aim to solve some of the perennial problems of theoretical psychology [33]. To develop new machine learning models, the present study utilizes only the concepts of this theory to infer the characteristics of human learning and establish conceptual models. Autonomous learning separates the existing subjects and predicates from the four pairs of confirmed information—Aa, Bb, Cc, and Dd—and adds possible subject–predicate networks to derive an additional 12 pairs of non-confirmed information—Ab, Ac, Ad, Ba, Bc, Bd, Ca, Cb, Cd, Da, Db, and Dc.
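The recombination just described can be sketched in a few lines of code. This is our own illustration of the counting (the pair names follow Figure 2; the code is not part of the original model):

```python
from itertools import product

# Confirmed information from direct learning: four subject-predicate pairs.
confirmed = [("A", "a"), ("B", "b"), ("C", "c"), ("D", "d")]

subjects = [s for s, _ in confirmed]
predicates = [p for _, p in confirmed]

# Autonomous learning recombines every subject with every predicate,
# minus the four pairs that direct learning already confirmed.
non_confirmed = [
    (s, p) for s, p in product(subjects, predicates)
    if (s, p) not in confirmed
]

print(len(non_confirmed))  # -> 12 (Ab, Ac, Ad, Ba, Bc, Bd, Ca, Cb, Cd, Da, Db, Dc)
```

The 4 × 4 = 16 possible subject–predicate pairs minus the 4 confirmed pairs leave exactly the 12 pieces of non-confirmed information listed above.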
In line with this, before a full-scale discussion of autonomous learning, the characteristics of computer programs that correspond to subject–predicate combinations should be established. Both Java and C++, which are object-oriented languages, have program structures that include subject–predicate pairs. Therefore, through autonomous learning processes, additional knowledge can be established from programs, as shown in Figure 2.
With regard to the relationship between proven knowledge and hypothetical knowledge, first, as shown in Figure 3, the boundary between proven knowledge and hypothetical knowledge is not fixed; their positions can change flexibly through interactions with the environment, such as decision-making and actions. Attention should be paid to the fact that hypothetical knowledge is utilized as a foundation for new decision-making and actions. If it is proven through interactions with the external environment, it becomes proven knowledge; that is, even proven knowledge carries uncertainty. In particular, through autonomous learning, proven knowledge is not simply accumulated; rather, the dynamics of proven and hypothetical knowledge come to occupy the major learning processes [3].
Second, attention should be paid to the fact that the basis of decision-making and actions is never proven knowledge but is new uncertain knowledge; that is, hypothetical knowledge in most cases. Attention should be also given to the fact that the ground for humans’ creative activities is not proven knowledge but hypothetical knowledge.
Third, hypothetical knowledge is verified as ex post facto through decision-making and actions to become the basis of proven knowledge.
Fourth, through autonomous learning, hypothetical knowledge gradually increases in step with the exponentially increasing number of networks, and proven knowledge is obtained from it.

2.3. The Relationship between Open Innovation and Autonomous Learning

As shown in Figure 4, the open innovation paradigm, being open to external ideas and knowledge, activates autonomous learning, thereby actively increasing hypothetical knowledge all the more [2,27]. Therefore, based on this knowledge, more creative decision-making and other activities that have not been conducted previously come to be actively carried out. That is, open innovation can be characterized by an increase in decision-making processes and activities based on hypothetical knowledge, despite the fact that these had not been conducted previously [34,35,36].
On the other hand, closed innovation, which is blocked from external ideas and knowledge, as shown in Figure 5, activates direct learning to increase proven knowledge [4,28]. Therefore, this knowledge is more efficient for decision-making and actions in accordance with patterns that are already established. That is, closed innovation can be characterized by the efficient implementation of decision-making processes and actions already established on the basis of proven knowledge.
Therefore, the setting of the ratio between autonomous learning and direct learning comes to act as a major determinant that identifies the characteristics and scope of learning in artificial intelligence.
Hybrid models that combine these two learning methods, for instance the learning model CLARION, can be proposed [37]. However, the efficiency of hybrid models is not high because hypothetical knowledge and proven knowledge are not in a synergic relationship. That is, as far as hypothetical knowledge acts as an input into direct learning and proven knowledge acts as an input into autonomous learning, hybrid models in which both are activated simultaneously cannot be postulated. Because both set the conditions for each other, even if conditions are established under which one kind of learning is activated more intensively, each acts as a limitation on the other in certain activities, thereby limiting their individual growth.
Figure 4 shows the high ratio of open innovation, while Figure 5 shows the high ratio of closed innovation.

3. Expansion of Autonomous Learning

3.1. First Expansion of Autonomous Learning: Denial of Differentiation between Subjects and Predicates

In line with this, before developing learning models, the absoluteness of the differentiation between subjects and predicates should be examined. Although no additional information is provided through links between self-nodes, the qualitative differences between the knowledge provided by the relationship between self-nodes and other subject nodes and the knowledge provided by the relationship between subject nodes and predicate nodes cannot be clearly established. That is, subjects and predicates are not completely differentiated qualitatively. In particular, the basic element of knowledge production is the combination of at least two nodes; the combinations of self-nodes are self-evident, so no additional knowledge is provided.
In Figure 2, in situations in which there are four pieces of direct learning—A-a, B-b, C-c, and D-d—if the qualitative differences between subject and predicate nodes are not acknowledged, direct learning will be defined as A➜a, B➜b, C➜c, and D➜d, and the 12 pieces of autonomous learning that first occur in Figure 2 from the additional links between subjects and predicates likewise become directional links.
However, if the homogeneity between subjects and predicates is postulated, first, 12 pieces of autonomous learning will additionally occur from the links among the existing subjects. They are A➜B, A➜C, A➜D, B➜A, B➜C, B➜D, C➜A, C➜B, C➜D, D➜A, D➜B, and D➜C. Second, 12 pieces of autonomous learning will occur from the links among the existing predicates. They are a➜b, a➜c, a➜d, b➜a, b➜c, b➜d, c➜a, c➜b, c➜d, d➜a, d➜b, and d➜c. Third, 16 pieces of autonomous learning from the links from predicates to subjects will be added. They are a➜A, a➜B, a➜C, a➜D, b➜A, b➜B, b➜C, b➜D, c➜A, c➜B, c➜C, c➜D, d➜A, d➜B, d➜C, and d➜D. That is, if the differentiation between subject nodes and predicate nodes is denied, in addition to the existing 12 pieces of autonomous learning, 40 additional pieces of autonomous learning will occur from the direct learning in Figure 2.
The major conditions and results of the first expansion of autonomous learning can be summarized as follows.
First, non-directional links are changed into directional links. When subjects and predicates are distinguished from each other, directional links per se are not meaningless in situations in which subjects and predicates are specified. However, when the two are not qualitatively distinguished from each other, directional links appear.
Second, when each node is linked to itself, the value of learning is defined as 0 because no additional knowledge is provided. This is interpreted as non-occurrence of learning.
Third, more pieces of autonomous learning occur than when subjects and predicates are distinguished from each other. Because of the additional quantity of autonomous learning, 52 pieces of autonomous learning occur in the four pieces of direct learning in which eight different nodes are linked to each other. This is different from the 12 pieces of autonomous learning based on the four different subjects and four different predicates in Figure 2.
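The counting in this subsection can be checked mechanically. The sketch below is our illustration (node names follow Figure 2); it enumerates the four categories of directional links and confirms the subtotals of 12, 12, 12, and 16, and the overall total of 52:

```python
from itertools import product

subjects = ["A", "B", "C", "D"]
predicates = ["a", "b", "c", "d"]
direct = {("A", "a"), ("B", "b"), ("C", "c"), ("D", "d")}

# Subject -> predicate links beyond direct learning (the original 12 pieces).
sp = set(product(subjects, predicates)) - direct
# Subject -> subject and predicate -> predicate links. Self-links are
# excluded: linking a node to itself yields no new knowledge (value 0).
ss = {(x, y) for x, y in product(subjects, repeat=2) if x != y}
pp = {(x, y) for x, y in product(predicates, repeat=2) if x != y}
# Predicate -> subject links; directionality makes these distinct pieces.
ps = set(product(predicates, subjects))

total = sp | ss | pp | ps
print(len(sp), len(ss), len(pp), len(ps), len(total))  # -> 12 12 12 16 52
```

Equivalently, 8 nodes allow 8 × 7 = 56 directional links between distinct nodes; subtracting the 4 direct-learning links leaves 52 pieces of autonomous learning.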

3.2. Second Expansion of Autonomous Learning: Vertical Expansion

In the discussion of Figure 2, the first expansion of autonomous learning that denies the differentiation between subjects and predicates has the nature of horizontal expansion. That is, it includes the knowledge made through additional combinations between the two nodes as the targets of autonomous learning.
However, two kinds of learning with contradictory natures exist in autonomous learning. Although the two may coexist in reality, refined models for them should be developed, because if the directions and effects of learning differ, their results will also differ. That is, the refined models cover new forms of combinations among nodes: additional pieces of autonomous learning brought about by increasing the number of links for a given number of nodes, beyond the learning that keeps the same number of links.
As shown in Figure 2, 52 pieces of autonomous learning occurred in the first expansion. If the number of links is expanded to two from the existing one link as vertical expansion, 52 × 6 = 312 pieces of autonomous learning will occur because six different nodes can be added to each of the 52 pieces of autonomous learning. If the number of links is expanded to three, 312 × 5 pieces of autonomous learning will occur and if the number of links expanded to four, five, six, or seven, 312 × 5 × 4, 312 × 5 × 4 × 3, 312 × 5 × 4 × 3 × 2, or 312 × 5 × 4 × 3 × 2 × 1, respectively, pieces of autonomous learning will occur.
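The vertical-expansion arithmetic above can be reproduced directly. This is an illustrative sketch of the multiplication chain, not original material:

```python
# Start from the 52 pieces of the first (horizontal) expansion over 8 nodes.
# Each added link must reach a node not yet in the chain, so the multiplier
# starts at 6 (8 nodes minus the 2 already used) and decreases by 1 per link.
pieces = 52
multiplier = 6
counts = {1: pieces}
for links in range(2, 8):
    pieces *= multiplier
    counts[links] = pieces
    multiplier -= 1

print(counts[2], counts[3])  # -> 312 1560  (52*6 and 52*6*5)
```

The chain ends at seven links, where the final multiplier is 1, so the six- and seven-link counts coincide (52 × 6 × 5 × 4 × 3 × 2 = 52 × 6 × 5 × 4 × 3 × 2 × 1 = 37,440).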
This vertical autonomous learning has the following characteristics.
First, the repeated appearances of the same node are ruled out. Autonomous learning is based on the addition of knowledge of individual nodes and the knowledge made by new combinations of the new added nodes. Therefore, the vertical expansion of certain autonomous learning is characterized by the circulations of different nodes through different directional links and nodes linked with each other without repetition.
Second, whereas horizontal expansion is the subject of the first expansion, vertical depth expansion is the subject of the second expansion. Horizontal expansion and vertical expansion are distinguished by the numbers of nodes and links relative to direct learning. That is, cases in which knowledge is made through new combinations while keeping the same numbers of nodes and links as direct learning correspond to horizontal expansion, the first expansion of autonomous learning. On the other hand, cases in which additional pieces of knowledge are made by increasing the numbers of nodes and links correspond to vertical expansion, the target of the second expansion of autonomous learning.

4. Key Characteristics of IMBDAL: Autonomy Invades Certainty

In the process of the dynamic development of human recognition, the development of autonomy brings about the decline of certainty, and these two progress in opposite directions [37]. Of course, because of the limited growth of autonomy, the decline of certainty also has a certain lower limit in terms of value.
Incidentally, these limits are reciprocal. That is, autonomous learning cannot grow infinitely because it is limited by direct learning. In line with this, direct learning cannot grow infinitely either because it is limited by autonomous learning. If these situations are restated in terms of autonomy and certainty, the growth of autonomy is limited by a minimum of certainty, and the growth of certainty is limited by a certain degree of autonomy. Autonomy cannot completely sacrifice certainty, and certainty cannot entirely sacrifice autonomy.
If this discussion is applied to the accumulation of knowledge in autonomous learning models, the existence of knowledge growing, as shown in Figure 6, is concluded.
The increase model, in which hypothetical knowledge 1 increases based on proven knowledge 1, proven knowledge 2 increases based on hypothetical knowledge 1, and hypothetical knowledge 2 increases based on proven knowledge 2, does not lead to exponentially divergent increases. It eventually leads to S-shaped convergent growth. This is the same form as the convergent knowledge increase in human autonomous learning, and it leads us to expect the same form of growth as the basic model of knowledge diffusion [38].
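The S-shaped convergence can be illustrated with a toy simulation in which each knowledge stock grows in proportion to the other but is damped near a ceiling. All parameter values here are illustrative assumptions of ours, not from the paper:

```python
# Proven knowledge (pk) and hypothetical knowledge (hk) feed each other's
# growth; the (1 - stock/limit) damping keeps the cascade from diverging,
# producing an S-shaped (logistic-like) curve instead of exponential growth.
limit, rate, steps = 100.0, 0.05, 400
pk, hk = 1.0, 0.0
history = []
for _ in range(steps):
    hk += rate * pk * (1 - hk / limit)  # hypothetical knowledge from proven
    pk += rate * hk * (1 - pk / limit)  # proven knowledge from hypothetical
    history.append(pk)

assert all(b >= a for a, b in zip(history, history[1:]))  # monotone growth
assert history[-1] < limit                                # convergent, not divergent
```

Dropping the damping terms turns the same loop into divergent exponential growth, which is exactly what the interaction model rules out.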
Therefore, if this mechanism is introduced into autonomous learning, autonomous learning will limit direct learning to bring about great changes to the possibility of unlimited expansion of direct learning in existing computer systems. If the mechanism of autonomous learning is introduced into computer programming or machine learning, the same limit of direct learning as that in human learning will be introduced into machine learning systems. Therefore, selectively constructing large-scale direct learning systems and autonomous learning systems based on their own functions and purposes is reasonable.

5. Mathematical Modeling

5.1. Causal Modeling of IMBDAL

The causal model of the interaction between direct and autonomous learning is shown in Figure 7. There are three important characteristics in the interactions between autonomous learning (AL) and direct learning (DL).
These are "AL and DL self-reinforcement (with a capacity limit)", "AL and DL synergy (being synergic with each other)", and "AL and DL conflict (conflicting with each other)". Each is discussed in more depth below.

5.1.1. AL and DL Self-Reinforcement and Existence of Capacity Limit

There are three reinforcing loops of "DL self-reinforcement": (R-DL-1), (R-DL-2), and (R-DL-3). (R-DL-1) is a reinforcing loop between DL and the direct learning knowledge stock (DLKS). (R-DL-2) is a reinforcing loop between DL and Total KS (total knowledge stock). Total knowledge stock is assumed to be the sum of DLKS and the autonomous learning knowledge stock (ALKS). The amount of direct learning (DL) is determined by DL effort and DL effectiveness.
  • (R-DL-1) DL Self-reinforcement from "DLKS" growth: DL↑ ➜ DLKS↑ ➜ DL Effectiveness↑ ➜ DL↑
  • (R-DL-2) DL Self-reinforcement from “Total Knowledge Stock”: DL↑ ➜ DLKS↑ ➜ Total KS↑ ➜ DL Effectiveness↑ ➜ DL↑
The reasons why we distinguish (R-DL-1) and (R-DL-2) are as follows. We expect that the coefficient (let us say, c-DLKS) in the relationship "from DLKS to DL effectiveness" and the coefficient (let us say, c-Total KS) in the relationship "from Total KS to DL effectiveness" can differ. In general, DL effectiveness is expected to be more strongly affected by DLKS than by Total KS. However, there are other possibilities. First, if "c-Total KS" is greater than "c-DLKS", DL effectiveness can be improved much more by Total KS than by DLKS; in that case, AL and ALKS have more positive effects on DL effectiveness. Second, "c-DLKS" can fall to almost zero as DLKS approaches the "DL capacity limit", because the efficiency of DLKS's impact on DL effectiveness is expected to decline as DLKS grows near the limit.
Distinguished from (R-DL-1) and (R-DL-2), (R-DL-3) is a reinforcing loop between DL and DL propensity. Here, the amount of knowledge stock is not directly related.
  • (R-DL-3) DL Self-reinforcement from “DL Propensity (or DL Inertia)”: DL↑ ➜ DL Propensity↑ ➜ DL Effort↑ ➜ DL↑
DL propensity is the inertia or tendency to keep the current learning method. As such, DL propensity means “behavioral inertia” on DL. It is because individuals or organizations have the tendency to keep more easily doing “what they have been doing” because it is familiar and behaviorally easy.
There are also three AL self-reinforcement loops very similar to those of DL: (R-AL-1), (R-AL-2), and (R-AL-3). The clearest difference between the DL and AL self-reinforcement loops is the existence of the "DLKS capacity limit". DLKS has a capacity limit, and it is expected that as DLKS grows toward that limit, its impact on DL effectiveness will slow toward zero. However, we anticipate that no such limit exists for AL and ALKS. As such, in the long term, ALKS can achieve a much higher growth level than DLKS.
The existence of the three strong reinforcing loops in AL and DL leads us to anticipate a "critical mass" in the learning process and the accumulation of knowledge stock. This critical mass is anticipated to exist in all kinds of learning and knowledge stock processes, at the individual, organizational, and national-economy levels.
If a knowledge stock starts to grow, whether through AL or DL, it follows the reinforcing loop growth pattern. Thus, although learning and knowledge stock are small at the early stage, once the critical mass is reached, growth can proceed at a very fast pace. This kind of phenomenon has been observed in many cases of the growth of economic knowledge stock, such as in Korea and recently in China.
Most importantly, because of the "capacity limit" that restricts the long-term growth of DLKS, AL is inevitable for an economy to continue to grow.

5.1.2. AL and DL Synergy

Synergy exists between AL and DL, which is caused by the “Total KS” growth. This is (R-Synergy).
  • (R-Synergy) AL and DL Synergy from “Total KS” Growth: DL↑ ➜ DLKS↑ ➜ Total KS↑ ➜ AL Effectiveness↑ ➜ AL↑ ➜ ALKS↑ ➜ Total KS↑ ➜ DL Effectiveness↑ ➜ DL↑
(R-Synergy) is a reinforcing loop that comes from the combination of (R-AL-2) and (R-DL-2). (R-Synergy) has a figure-eight shape, and it comes from the interaction of the Total KS with each learning method, AL and DL. It is not important whether we start our learning from AL or DL; regardless of the starting point, once the Total KS starts to grow to some level, the other learning method is also greatly helped.
However, if we take into account the possibility of external resources and external help, then DL can be more effective in the early stage of KS building.

5.1.3. AL and DL Conflict

A conflict between AL and DL also exists, arising from the "resource constraint" and from "AL or DL propensity".
As explained, the propensity is the behavioral inertia in learning, and it can be defined as "learning inertia".
As the dependency on a single learning method (AL or DL) grows, the propensity or inertia toward that method is reinforced, and it restricts the use of the other method because of the resource constraint. If resources were infinite, this would not be a problem.
There are two types of AL and DL conflict: (R-Conflict1) and (R-Conflict2). Both are reinforcing loops that make "the rich richer and the poor poorer": if one of AL or DL comes to be mainly used, the other will be used less and less.
  • (R-Conflict1) DL↑ ➜ DLP (DL Propensity)↑ ➜ ALP↓ ➜ AL Effort↓ ➜ AL↓ ➜ ALP↓ ➜ DLP↑ ➜ DL Effort↑ ➜ DL↑ (not through KS but through propensity)
  • (R-Conflict2) AL↑ ➜ ALP (AL Propensity)↑ ➜ DLP↓ ➜ DL Effort↓ ➜ DL↓ ➜ DLP↓ ➜ ALP↑ ➜ AL Effort↑ ➜ AL↑ (not through KS but through propensity)
Because AL and DL conflict with each other, excessive reliance on a single learning method can harm the usage of the other method; as a result, the long-term total knowledge stock can be harmed. These conflicts are reinforcing loops that make "the rich richer and the poor poorer": without proper policy intervention, as time goes on, one of AL or DL will come to be mainly used while the other is used less and less.
In addition, excessive reliance on a single learning method can also harm that method itself in the long term, through the (B-Self-Conflict) loop. This is a strong balancing loop that can harm the entire learning process through self-restriction.
  • (B-Self-Conflict) DL↑ ➜ DLP↑ ➜ ALP↓ ➜ AL Effort↓ ➜ AL↓ ➜ ALKS↓ ➜ Total KS↓ ➜ DL Effectiveness↓ ➜ DL↓ (the same loop exists on the AL side)
This "self-conflict" means two things. First, excessive reliance on a single learning method can significantly harm the Total KS that has been built up. Second, strong restricting loops exist because of the reliance on a single learning method. We should therefore be cautious that excessive reliance on a single learning method does not negatively affect the entire learning process.
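The "rich get richer" dynamic of the conflict loops can be sketched with a toy propensity model. This is our illustration; the shift coefficient and starting bias are arbitrary assumptions:

```python
# Effort follows propensity, and propensity drifts toward whichever method
# was used more, so any initial bias is reinforced each period.
dlp = 0.55                 # DL propensity; AL propensity is 1 - dlp by assumption
shift = 0.05               # strength of the learning-inertia feedback
for _ in range(50):
    dl_use, al_use = dlp, 1 - dlp
    dlp += shift * (dl_use - al_use)   # reinforce the dominant method
    dlp = min(max(dlp, 0.0), 1.0)      # propensities stay in [0, 1]

print(dlp)  # -> 1.0: without intervention, DL crowds out AL entirely
```

A starting bias toward AL (dlp below 0.5) drives the same loop to the opposite extreme, which is why the text argues for policy intervention to keep both methods in use.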

5.1.4. Possibility of External Resource Injection

We assume here that external resource injection can be used only for DL, because by its nature autonomous learning cannot be achieved through external forces or aids. If external resources can be used for "DL effort", they can speed up the entire KS building process by accelerating AL and DL self-reinforcement as well as AL and DL synergy.
Because DL can be boosted through external resource injection, in the early stage of building knowledge stock it may be proper to use DL more than AL, drawing on external resources and aid. However, because DLKS may have a capacity limit, the use of AL will inevitably be needed to boost long-term Total KS growth. We have three necessary considerations, which are as follows:
  • DLP (DL propensity) + ALP (AL propensity) = 1
  • DL effectiveness is expected to decrease as DLKS (or Total KS) grows (the DLKS capacity limit).
  • AL effectiveness is expected to be maintained or to grow as ALKS (or Total KS) grows.
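These three considerations can be encoded directly. The specific functional forms and constants below are our assumptions, chosen only to satisfy the stated properties.

```python
def dl_effectiveness(dlks, capacity=100.0, base=1.0):
    """DL effectiveness decreases toward zero as DLKS approaches its capacity limit."""
    return base * max(0.0, 1.0 - dlks / capacity)

def al_effectiveness(alks, base=2.0, growth=0.01):
    """AL effectiveness is maintained or grows with ALKS."""
    return base * (1.0 + growth * alks)

def propensities(dlp):
    """DLP + ALP = 1 by construction."""
    return dlp, 1.0 - dlp
```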

5.2. Mathematical Model Building

5.2.1. Basic Condition for Mathematical Modeling

As described in Figure 1, consider a simple model of the accumulation of knowledge over time whose process consists of two parts, DL and AL. DL creates proven or confirmed knowledge (PK) from the input of hypothetical knowledge (HK), which is created by AL, whereas AL creates HK from PK, which is created by DL.
(a)
We assume that the accumulation of knowledge determines the rewards in each situation, so the size of the knowledge stock is what matters. The knowledge stock consists of two parts: the sum of PK, i.e., DLKS, and the sum of HK, i.e., ALKS. The accumulation of PK also interacts with the accumulation of HK. These assumptions differ in several respects from the usual machine-learning perspective [38].
(b)
The increase of DLKS through direct learning requires cost, in that the input resource at each time step is allocated automatically by the DL propensity, and the output of DL is proportional to the size of this input. The increase of DLKS is finite in each time interval, because knowledge creation in finite time cannot be infinite. In addition, there is a memory-decay cycle, the DL forget coefficient [39], which deletes old DL knowledge that no longer fits the changed environment.
(c)
ALKS differs from DLKS in several respects. Autonomous learning arises from the horizontal and vertical expansion of DLKS. The creation of knowledge by AL is much larger than that by DL, but the increase of ALKS is finite in each time interval, as with DLKS. This process requires the cost of injecting resources at each time step, allocated by the AL propensity. DL helps AL, since AL increases in proportion to DLKS. At the same time, DL conflicts with AL in that DL propensity + AL propensity = 1, meaning that if the DL propensity increases, the AL propensity decreases. There is also a memory-decay cycle based on the AL forget coefficient.
(d)
At each time step, resources are divided into two parts by the DL and AL propensities, whose sum is one. Propensities represent behavioral inertia, which prevents rapid change from one side to the other, while continuous success on one side increases that side's propensity. Propensities are bounded by the coefficients DL_min and DL_max, which set the minimum and the maximum of the DL propensity, respectively.

5.2.2. Building up Activating Model

At time t, let the available (or input) resource be R_t.
Resource allocation based on the effort functions is as follows. Resource R_t is partitioned by the DL and AL propensities:
  • [DL Effort]_t = DLP_{t-1} × R_t
  • [AL Effort]_t = ALP_{t-1} × R_t
The propensities, based on the behavior-inertia function, are as follows.
  • (DLP_t, ALP_t) = Inertia(ΔDL_t, ΔAL_t, DLP_{t-1}, ALP_{t-1}, Inc_min, Inc_max, DL_min, DL_max)
  • DLP_t + ALP_t = 1
  • DL_min ≤ DLP_t ≤ DL_max
  • 1 − DL_max ≤ ALP_t ≤ 1 − DL_min
  • Inc_min ≤ DLP_t − DLP_{t-1} ≤ Inc_max
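A runnable sketch of the Inertia() function under the constraints above. The update direction, shifting DLP toward whichever method improved more in the last step, is our interpretation of behavior inertia; the bounds follow the listed constraints, with Inc_min taken as 0 for simplicity.

```python
def inertia(d_dl, d_al, dlp_prev, inc_max=0.05, dl_min=0.1, dl_max=0.9):
    """Update (DLP_t, ALP_t) with a bounded per-step change and propensity bounds."""
    # Shift toward the learning method whose output improved more last step.
    step = inc_max if d_dl >= d_al else -inc_max
    # Clamp to [dl_min, dl_max]; ALP follows from DLP_t + ALP_t = 1.
    dlp = min(max(dlp_prev + step, dl_min), dl_max)
    return dlp, 1.0 - dlp
```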
The effectiveness (the output of knowledge per unit resource) is as follows.
  • [DL Effectiveness]_t = DL_Effective(ALKS_t, DL_coefficients)
Knowledge creation by DL in a unit time step has an upper bound.
  • [AL Effectiveness]_t = AL_Effective(DLKS_t, AL_coefficients)
Knowledge creation by AL in a unit time step has an upper bound that is much larger than that of DL.
The learning functions for direct learning and autonomous learning are as follows.
  • DL_t = [DL Effectiveness]_{t-1} × [DL Effort]_t
  • AL_t = [AL Effectiveness]_{t-1} × [AL Effort]_t
The increase of knowledge stocks is as follows.
  • [DLKS]_t = [DLKS]_{t-1} × (1 − DL_forget_coefficient) + DL_t
  • [ALKS]_t = [ALKS]_{t-1} × (1 − AL_forget_coefficient) + AL_t
DL creates proven knowledge, and AL creates hypothetical knowledge.
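The equations of Section 5.2.2 can be condensed into one runnable update loop. The effectiveness functions, forget coefficients, and inertia rule below are simple stand-ins (our assumptions); only the structure (effort allocation by propensity, learning as effectiveness times effort, and stock updates with forgetting) follows the model.

```python
def simulate(steps=100, resource=1.0, dlp=0.5,
             dl_forget=0.01, al_forget=0.02,
             inc_max=0.02, dl_min=0.1, dl_max=0.9):
    """Return (DLKS, ALKS) after running the IMBDAL-style update loop."""
    dlks = alks = 0.0
    dl_prev = al_prev = 0.0
    for _ in range(steps):
        dl_effort = dlp * resource                 # [DL Effort]_t = DLP_{t-1} x R_t
        al_effort = (1.0 - dlp) * resource         # [AL Effort]_t = ALP_{t-1} x R_t
        dl_eff = 1.0 / (1.0 + 0.05 * dlks)         # bounded DL output per unit resource
        al_eff = 2.0 * dlks / (1.0 + dlks)         # AL builds on DLKS, larger upper bound
        dl = dl_eff * dl_effort                    # DL_t
        al = al_eff * al_effort                    # AL_t
        dlks = dlks * (1.0 - dl_forget) + dl       # [DLKS]_t with forgetting
        alks = alks * (1.0 - al_forget) + al       # [ALKS]_t with forgetting
        # Behavior inertia: bounded shift toward the better-performing method.
        step = inc_max if dl - dl_prev >= al - al_prev else -inc_max
        dlp = min(max(dlp + step, dl_min), dl_max)
        dl_prev, al_prev = dl, al
    return dlks, alks
```

Because the forgetting terms balance the bounded inflows, both stocks remain finite, in line with the upper bounds reported in the simulation results.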

5.2.3. Simulation Results

If we do not use the resource-allocation logic based on behavior inertia but instead use the constant propensities (DLP_t, ALP_t) = (0.5, 0.5), we obtain the curves of PK and HK in Figure 8. As time goes by, the growth of (PK, HK) slows down to zero, and (PK, HK) converges to a fixed value.
Now, let us look at the result when we use the resource-allocation logic based on behavior inertia. As we can see in Figure 9, the knowledge sizes of PK and HK each have an upper bound.
The DL propensity DLP_t and AL propensity ALP_t are shown in Figure 10. We set the propensity minimum to 0.1, the propensity maximum to 0.9, and the initial values of DLP_1 and ALP_1 to 0.5. DLP increases to its maximum value while HK (Figure 9) stays at a local maximum, remains there for some time, then goes down to 0.5 where PK (Figure 9) reaches its global maximum, and finally goes down to the minimum and remains there as PK and HK converge to their limiting values.
Figure 11 shows the result for knowledge effectiveness. As defined, DL effectiveness is the productivity of creating PK from HK using a unit resource. As time goes on, it increases to the first productivity constant, which is set by the initial condition, then steps up to the second productivity constant, and so on. The productivity constants of AL effectiveness are set much larger than those of DL effectiveness.

6. Discussion and Application

6.1. Application to Machine Learning

IMBDAL can be applied to various areas of machine learning and artificial intelligence. In this paper, we present areas in which autonomous learning models can be applied immediately at the level of conceptual models.
First, cleaning robots can be included. Current cleaning robots have no action mechanism to change their moving speed and suction intensity according to the waste to be collected or the characteristics of the floor to be cleaned. If autonomous learning is introduced as per the model in Figure 12, the learning of the cleaning robots will increase as they pass through steps n − 1, n, and n + 1, so that they can adapt to various types of waste and floors, and the cleaning efficiency will increase. However, these processes involve additional costs, because the robots' speed is reduced while autonomous learning is being applied and energy consumption increases. In addition, if the time in which learning can be applied grows beyond the normal operating time, assigned to be 80%, the enhancement of cleaning efficiency will reach its limit. That is, when autonomous learning is introduced into cleaning robots, the cleaning efficiency will not improve infinitely; there is a limit. Moreover, if pieces of learning from different stages contradict each other, replacing the earlier learning with the later learning follows naturally from the logic of autonomous learning. This idea has already been registered as a business-model patent at the Korean patent office.
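A minimal sketch of the contradiction-replacement rule described for the cleaning robot. The class, the rule layout, and the 80%/20% split between normal operation and learning time are illustrative assumptions of ours, not the patented design.

```python
class CleaningRobotLearner:
    """Maps (floor type, waste type) to (speed, suction); later learning prevails."""

    def __init__(self, learning_budget=0.2):
        # At most this fraction of time is spent learning (80% normal operation).
        self.learning_budget = learning_budget
        self.rules = {}   # (floor, waste) -> (speed, suction)

    def learn(self, floor, waste, speed, suction):
        # A rule that contradicts an earlier one simply replaces it.
        self.rules[(floor, waste)] = (speed, suction)

    def act(self, floor, waste, default=(1.0, 1.0)):
        # Fall back to default behavior for situations not yet learned.
        return self.rules.get((floor, waste), default)
```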
Second, intelligent navigation systems can be included. In existing navigation systems, vehicle operation records are not accumulated into the navigation database for the next trip. In intelligent navigation systems, as shown in Figure 13, information on vehicle operation times and routes is accumulated and provided, alongside the existing navigation database, as the basis of the next use of the navigation. In this case, the navigation system of a vehicle that frequently operates in a certain region can accumulate more accurate information suited to reality and thereby provide customized navigation information. Even with the same navigation system, different navigation information can be provided depending on the user's operating regions and times. This navigation information is not infinitely individualized: only those navigation updates that contradict the existing database information are additionally accumulated, and if newly accumulated information differs from that of earlier stages, the information of the last stage prevails. These intelligent navigation systems provide navigation information fitted to individuals' inclinations at the sacrifice of the normalized database information. If conflicts between individuals' navigation usage records occur frequently, the individual accumulation of navigation information will be limited; that is, although intelligent navigation systems provide differentiated information, there are limits to learning, such as limits on deviation from the database and on the accumulation of information. This idea is under review for registration as a business-model patent at the Korean patent office.
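The navigation accumulation rule above, in which only contradictory records are stored and the latest record prevails, can be sketched as follows; the data layout is our assumption.

```python
class IntelligentNavigation:
    """Personalized overlay on a normalized navigation database."""

    def __init__(self, base_db):
        self.base_db = dict(base_db)   # normalized navigation database
        self.personal = {}             # per-user updates that differ from base

    def record_trip(self, route, travel_time):
        # Accumulate only information that contradicts what we already hold.
        current = self.personal.get(route, self.base_db.get(route))
        if travel_time != current:
            self.personal[route] = travel_time   # the last stage prevails

    def estimate(self, route):
        # Personal records take precedence over the normalized database.
        return self.personal.get(route, self.base_db.get(route))
```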
Third, intelligent Web search engines can be included. For instance, if three pieces of direct learning occur in stage n − 1, as shown in Figure 14, autonomous learning with a breadth of 27 and a much larger depth will occur. The search attempts of stage n then lead to five pieces of direct learning and to much wider and deeper autonomous learning. If autonomous learning from stage n − 1 contradicts direct learning in stage n, the earlier relevant autonomous learning is deleted. Likewise, direct learning from an earlier stage that contradicts direct learning in a later stage, together with the autonomous learning derived from it, is also deleted. Direct learning in search engines refers to cases in which the relevant knowledge was not generated by the operation of the search engine itself but was produced and accumulated by information producers. Such autonomous-learning search engines can not only expand information quantitatively but also produce qualitatively deeper and more varied information. Of course, there are limits to quantitative expansion and qualitative accumulation, and the uncertainty of information is not completely relieved at the search stage. Web searches based on autonomous learning are characterized by the coexistence of a small proportion of proven knowledge with uncertain boundaries and a high proportion of unproven knowledge; therefore, creative search results and uncertain search results appear simultaneously.
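The breadth figure of 27 from three pieces of direct learning suggests ordered triples over the three direct-learning units (3^3 = 27); that reading is our assumption, but under it the horizontal expansion can be sketched as:

```python
from itertools import product

def autonomous_breadth(direct_units, depth=None):
    """All ordered combinations of the direct-learning units; depth defaults to
    the number of units, giving n**n combinations for n units."""
    k = depth if depth is not None else len(direct_units)
    return list(product(direct_units, repeat=k))

combos = autonomous_breadth(["d1", "d2", "d3"])   # 27 horizontal combinations
```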

6.2. Discussion from Findings at Simulation

First, autonomous learning can grow much more than direct learning, because it arises from the horizontal or vertical expansion of direct learning, whether the two kinds of learning have constant propensities or behavior inertia, as seen in Figure 8 and Figure 9. However, autonomous learning cannot increase without enough direct learning. This is the same in human learning and in firms' learning: in particular, a firm cannot succeed in creative open innovation without a core capability based on internal research and development.
Second, the direct- and autonomous-learning propensities are not constant. The direct-learning propensity is high in the early stage, and the autonomous-learning propensity is high in the late stage, as seen in Figure 10. A firm should invest more resources in accumulating internal capability at the early stage, but more in open innovation after it has grown. The situation is similar in human learning.
Third, direct and autonomous learning grow neither constantly nor smoothly, but in a stepwise fashion, as seen in Figure 11. Even the effectiveness of direct learning grows in steps, not to mention that of autonomous learning. The same holds for firms' internal research and development: when a firm invests in internal research, the innovation results appear not at once but stepwise, after an interval. Even when a firm pursues open innovation such as M&A or partnerships, the results do not appear at once. The situation is the same in human learning.
Fourth, direct and autonomous learning increase by diminishing amounts in a given time, as in Figure 11, and reach a maximum, as in Figure 9 and Figure 10. An infinite increase of knowledge within a firm is not possible. As such, destructive innovation is not optional but essential for any firm to survive in the market. Human beings also have a maximum in learning, even though they can increase their autonomous learning until late in life.

7. Conclusions and Future Work

IMBDAL presents a newly conceptualized learning model, centered on the characteristics of humans' autonomous learning, that can be applied to machine learning. The model is presented at the conceptual level and developed into a mathematical model through the causal-loop model. Moreover, targets to which it may actually be applied in machine learning were derived, and autonomous-learning algorithms for those targets were developed to verify in advance the characteristics and limitations that appear when the models are applied to actual machine learning. If IMBDAL is applied to machine learning or computer science through this process, machine learning that satisfies users' conditions will be deepened according to the users' demands and expectations, so that the relevant machines or computers are differentiated from other, similarly built machines or computers, and they will accumulate knowledge or information specialized for individuals. In addition, if IMBDAL is applied to machine learning or computer science, the characteristics of certainty, completeness, and continuous maintenance of acquired information, which have been regarded as characteristics of machines and computers, will be replaced by uncertainty, incompleteness, and information acquired under continuous feedback processes of revision, supplementation, addition, and deletion.
In further studies, this model should be made more sophisticated through simulations under various conditions, and the characteristics and elements of IMBDAL should be presented more clearly. In addition, the model should be applied directly to various kinds of real machine learning to enhance its fit with reality.
IMBDAL also helps us understand the essence of open innovation in the learning process. Open innovation increases the emergence of new knowledge for creativity at the sacrifice of the efficiency of direct-learning accumulation. If we follow the implications of IMBDAL, we have to choose the ratio between creativity and efficiency when building an open innovation strategy for any firm. However, to apply IMBDAL directly to establishing open innovation strategy, more sophisticated conditions for individual open or closed innovation strategies should be developed through simulations of several situations.
Finally, we acknowledge that this study addressed open innovation learning, that is, the common area of open innovation and learning. Thus, "learning in open innovation" and/or "open innovation in learning" should be researched additionally.

Acknowledgments

This work was supported by the Daegu Gyeongbuk Institute of Science and Technology Research and Development Program of the Ministry of Science, ICT & Future Planning of Korea (16-IT).

Author Contributions

JinHyo Joseph Yun built up the research questions and wrote this paper. Dooseok Lee and Heungju Ahn did the mathematical modeling. Kyungbae Park did the causal-loop modeling. Tan Yigitcanlar gave the most valuable comments and ideas in developing the paper from its early version to the fully developed one.

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Shen, W.-M. Discovery as autonomous learning from the environment. Mach. Learn. 1993, 12, 143–165. [Google Scholar] [CrossRef]
  2. Jeon, J.-H.; Kim, S.-K.; Koh, J.-H. Historical review on the patterns of open innovation at the national level: The case of the roman period. J. Open Innov. Technol. Mark. Complex. 2015, 1, 1–17. [Google Scholar] [CrossRef]
  3. Dorigo, M.; Colombetti, M. Robot shaping: Developing autonomous agents through learning. Artif. Intell. 1994, 71, 321–370. [Google Scholar] [CrossRef]
  4. Oganisjana, K. Promotion of university students’ collaborative skills in open innovation environment. J. Open Innov. Technol. Mark. Complex. 2015, 1, 1–17. [Google Scholar] [CrossRef]
  5. Figueiredo, M.A.; Jain, A.K. Unsupervised learning of finite mixture models. IEEE Trans. Pattern Anal. 2002, 24, 381–396. [Google Scholar] [CrossRef]
  6. Niebles, J.C.; Wang, H.; Fei-Fei, L. Unsupervised learning of human action categories using spatial–temporal words. Int. J. Comput. Vis. 2008, 79, 299–318. [Google Scholar] [CrossRef]
  7. Saul, L.K.; Roweis, S.T. Think globally, fit locally: Unsupervised learning of low dimensional manifolds. J. Mach. Learn. Res. 2003, 4, 119–155. [Google Scholar]
  8. Corbett, A.T.; Anderson, J.R. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Model. User-Adapt. 1994, 4, 253–278. [Google Scholar] [CrossRef]
  9. Dianyu, Z. English learning strategies and autonomous learning. Foreign Lang. Educ. 2005, 1, 12. [Google Scholar]
  10. Dickinson, L. Talking shop aspects of autonomous learning. ELT J. 1993, 47, 330–336. [Google Scholar]
  11. Jacobs, D.M.; Michaels, C.F. Direct Learning. Ecol. Psychol. 2007, 19, 321–349. [Google Scholar] [CrossRef]
  12. Nunan, D. Towards autonomous learning: Some theoretical, empirical and practical issues. In Taking Control: Autonomy in Language Learning; Pemberton, R., Li, E.S.L., Or, W.W.F., Pierson, H.D., Eds.; Hong Kong University Press: Hong Kong, China, 1996; pp. 13–26. [Google Scholar]
  13. Zhou, D.; DeBrunner, V.E. Novel adaptive nonlinear predistorters based on the direct learning algorithm. IEEE Trans. Signal Process. 2007, 55, 120–133. [Google Scholar] [CrossRef]
  14. Molleman, L.; Van den Berg, P.; Weissing, F.J. Consistent individual differences in human social learning strategies. Nat. Commun. 2014, 5, 3570. [Google Scholar] [CrossRef] [PubMed]
  15. Bonifacio, M.; Bouquet, P.; Cuel, R. Knowledge nodes: The building blocks of a distributed approach to knowledge management. J. Univers. Comput. Sci. 2002, 8, 652–661. [Google Scholar]
  16. Gil, A.B.; Peñalvo, F.J.G. Learner course recommendation in e-Learning based on swarm intelligence. J. Univers. Comput. Sci. 2008, 14, 2737–2755. [Google Scholar]
  17. Belussi, F.; Sammarra, A.; Sedita, S.R. Learning at the boundaries in an “Open Regional Innovation System”: A focus on firms’ innovation strategies in the Emilia Romagna life science industry. Res. Policy 2010, 39, 710–721. [Google Scholar] [CrossRef]
  18. Jacoby, L.L. On interpreting the effects of repetition: Solving a problem versus remembering a solution. J. Verb. Learn. Verb. Behav. 1978, 17, 649–667. [Google Scholar] [CrossRef]
  19. McGrath, R.G. Exploratory learning, innovative capacity, and managerial oversight. Acad. Manag. J. 2001, 44, 118–131. [Google Scholar] [CrossRef]
  20. Arthur, W.B. Inductive reasoning and bounded rationality. Am. Econ. Rev. 1994, 84, 406–411. [Google Scholar]
  21. Gigerenzer, G.; Selten, R. Bounded Rationality: The Adaptive Toolbox; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  22. Simon, H.A. Theories of bounded rationality. In Decision and Organization; McGuire, C.B., Radner, R., Eds.; North-Holland Pub. Co.: Amsterdam, The Netherlands, 1972; Volume 1, pp. 161–176. [Google Scholar]
  23. Simon, H.A. Models of Bounded Rationality: Empirically Grounded Economic Reason; MIT Press: Cambridge, MA, USA, 1982. [Google Scholar]
  24. Simon, H.A. Bounded rationality and organizational learning. Organ. Sci. 1991, 2, 125–134. [Google Scholar] [CrossRef]
  25. Love, J.H.; Roper, S.; Vahter, P. Learning from open innovation; CSME Working Paper No. 112; Warwick Business School: Coventry, UK, 2011. [Google Scholar]
  26. Shanks, D.R.; St John, M.F. Characteristics of dissociable human learning systems. Behav. Brain Sci. 2010, 17, 367–395. [Google Scholar] [CrossRef]
  27. Yun, J.J.; Won, D.; Park, K. Dynamics from open innovation to evolutionary change. J. Open Innov. Technol. Mark. Complex. 2016, 2, 1–22. [Google Scholar] [CrossRef]
  28. Kodama, F.; Shibata, T. Demand articulation in the open-innovation paradigm. J. Open Innov. Technol. Mark. Complex. 2015, 1, 1–21. [Google Scholar] [CrossRef]
  29. Ormrod, J.E.; Davis, K.M. Human Learning; Merrill: Princeton, NC, USA, 2004; pp. 1–5. [Google Scholar]
  30. Kessler, G.; Bikowski, D. Developing collaborative autonomous learning abilities in computer mediated language learning: Attention to meaning among students in wiki space. Comput. Assist. Lang. Learn. 2010, 23, 41–58. [Google Scholar] [CrossRef]
  31. Polanyi, M. Personal Knowledge: Towards a Post-Critical Philosophy; University of Chicago Press: Chicago, IL, USA, 2012. [Google Scholar]
  32. Watanabe, Y.; Nishimura, R.; Okada, Y. Confirmed knowledge acquisition using mails posted to a mailing list. In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP), Jeju Island, Korea, 11–13 October 2005; Dale, R., Wong, K., Su, J., Kwong, O.Y., Eds.; Springer-Verlag: Berlin, Germany; pp. 131–142.
  33. Gluck, M.A.; Bower, G.H. Evaluating an adaptive network model of human learning. J. Mem. Lang. 1988, 27, 166–195. [Google Scholar] [CrossRef]
  34. Chesbrough, H.W. Open Innovation: The New Imperative for Creating and Profiting from Technology; Harvard Business Press: Boston, MA, USA, 2003. [Google Scholar]
  35. Chesbrough, H. Open innovation: A new paradigm for understanding industrial innovation. In Open Innovation: Researching a New Paradigm; Chesbrough, H., Vanheverbeke, W., West, J., Eds.; Oxford University Press: Oxford, UK, 2006; pp. 1–12. [Google Scholar]
  36. Gassmann, O.; Enkel, E. Towards a theory of open innovation: Three core process archetypes. In Proceedings of the R&D Management Conference, Lisbon, Portugal, 6 July 2004.
  37. Sun, R.; Peterson, T. Autonomous learning of sequential tasks: Experiments and analyses. IEEE Trans. Neural Netw. 1998, 9, 1217–1234. [Google Scholar] [CrossRef] [PubMed]
  38. Sutton, R.S.; Barto, A.G. Reinforcement Learning: A Bradford Book; The MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  39. Lam, K.C.; Donald, L.; Hu, T. Understanding the effect of the learning-forgetting phenomenon to duration of projects construction. Int. J. Proj. Manag. 2001, 19, 411–420. [Google Scholar] [CrossRef]
Figure 1. Basic model of the relationship between direct learning and autonomous learning of humans.
Figure 2. Extension of nonconfirmed information and hypothetical knowledge from proven knowledge and confirmed information.
Figure 3. Relationship between proven knowledge and hypothetical knowledge.
Figure 4. Learning Model in an open innovation-centered paradigm.
Figure 5. Learning model in closed innovation-centered paradigm.
Figure 6. Knowledge growth in IMBDAL.
Figure 7. Causal relationship model of autonomous learning and direct learning.
Figure 8. Simulation result in DL and AL when the propensities are constants.
Figure 9. Simulation result in DL and AL when using the behavior inertia.
Figure 10. Simulation result in DLP and ALP when using the behavior inertia.
Figure 11. Simulation result in DL and AL Effectiveness.
Figure 12. Autonomous learning algorithm of a cleaning robot.
Figure 13. Autonomous learning algorithm of intelligent navigation.
Figure 14. Autonomous learning algorithm of an intelligent Web search engine.

Yun, J.J.; Lee, D.; Ahn, H.; Park, K.; Yigitcanlar, T. Not Deep Learning but Autonomous Learning of Open Innovation for Sustainable Artificial Intelligence. Sustainability 2016, 8, 797. https://0-doi-org.brum.beds.ac.uk/10.3390/su8080797