Article

Exploring the Implementation of a Legal AI Bot for Sustainable Development in Legal Advisory Institutions

Juin-Hao Ho, Gwo-Guang Lee and Ming-Tsang Lu
1 Department of Information Management, National Taiwan University of Science and Technology, Taipei 10607, Taiwan
2 College of Management, National Taipei University of Business, Taipei 10051, Taiwan
* Authors to whom correspondence should be addressed.
Sustainability 2020, 12(15), 5991; https://0-doi-org.brum.beds.ac.uk/10.3390/su12155991
Submission received: 30 June 2020 / Revised: 20 July 2020 / Accepted: 22 July 2020 / Published: 25 July 2020

Abstract

This study explores the implementation of a legal artificial intelligence (AI) bot for sustainable development at legal advisory institutions. Although a legal advisory AI bot built on AI reasoning methods can make legal rules and definitions conveniently accessible, it has not been established whether users are ready to use one at legal advisory institutions. This study applies an MCDM (multicriteria decision-making) model, the DEMATEL (decision-making trial and evaluation laboratory)-based Analytical Network Process (ANP) combined with a modified VIKOR, to explore user behavior regarding the implementation of a legal AI bot. We first apply the DEMATEL-based ANP (DANP) to derive influential weights and to map the complex adoption strategies systematically, and we then employ the modified VIKOR (M-VIKOR) method to determine how to reduce the performance gaps between the ideal values and the existing situation. Lastly, we conduct an empirical case to show the efficacy and usefulness of the recommended integrated MCDM model. The findings help identify the priorities to be considered in the implementation of a legal AI bot and the issues related to enhancing its implementation process. Moreover, this research offers an understanding of users’ behaviors and their actual needs regarding a legal AI bot at legal advisory institutions. This research obtains the following results: (1) it effectively assembles a decision network of technical improvements and applications of a legal AI bot at legal advisory institutions and explains the feedback and interdependence among aspects/factors in real-life issues; and (2) it shows how to move the current alternative performances toward the ideal values so that legal AI bot implementation fits the existing environments at legal advisory institutions.

1. Introduction

Artificial intelligence (AI)-based legal bots have attracted extensive attention and have emerged as one of the most promising technological innovations. Robotics, as a replacement for human labor, is becoming a crucial issue, since AI essentially functions as an intelligent robot. The development of AI as a basis for resolving various questions in life, including legal ones, is becoming increasingly important. However, experts or human workers are still needed to operate such legal expert systems. In 2017, an online AI platform, DoNotPay, which provides free legal advice, was released in the U.S. Its creator, Joshua Browder, calls it the “first legal robot”, and it can handle up to 1000 kinds of civil matters [1]. A legal AI bot can also help institutions that offer legal advice services, such as advice bureaus and community legal service centers, pursue sustainable development.
Such institutions often rely on part-time interns and volunteers to offer legal assistance and advice, many of them relatively early in their legal careers. Nevertheless, they are asked to advise on a very wide range of legal problems, often with heavy client caseloads and sometimes with heterogeneous issues (such as immigration law or consumer law). They usually have limited financial resources to engage outside legal advisors or to hire more personnel. Legal AI bots could therefore resolve many of these issues and save human resources and related costs, supporting sustainable development.
A significant issue in this area is identifying which aspects/factors contribute to users’ intention to use legal AI bot services for sustainable development. Existing studies have shown interest in exploring the intention stage (wanting and planning to use) and the adoption stage (being willing to use) of a legal AI bot. Nevertheless, most legal AI bot studies have used different methods and frameworks, making it difficult to connect the findings of diverse studies and to build a coherent picture of user intention and adoption in this service area. Compared with a physical legal advisory institution, the rapid growth of legal AI bots could bring administrators of legal advisory institutions great benefits and efficiency. For legal advisory institutions, it is therefore important to understand the strategies for legal AI bot implementation and the basis on which users decide to use legal AI bots. Thus, this study addresses the following research questions: (1) In implementing legal AI bots, which influencing factors should be prioritized to improve users’ intention to use them? (2) In using legal AI bots, which influencing factors do users prioritize when deciding whether to continue using them? (3) How do the influencing factors considered at the intention stage differ from those at the adoption stage?
The purpose of this research is to offer an understanding of the aspects affecting legal AI bots and their implementation at legal advisory institutions in order to decrease the performance gaps among the factors and aspects for sustainable development. Evaluating legal AI bot implementation is a multiple-attribute decision analysis problem whose attributes are often interdependent and may even exhibit feedback effects. One therefore needs to recognize that these factors exhibit various associations between lower- and higher-level elements. In addition, most common strategic models cannot take the interrelationships and dependences among different levels of factors into consideration.
Our research seeks to determine the extent to which a variety of factors affect the outcomes of the specific factors of legal AI bot implementation. The research is distinguishable partly owing to its use of various internal sources for dependent and independent information. Hence, the research objective is to set up an integrated MCDM (multicriteria decision-making) model so as to determine ways to resolve problems in legal AI bot applications.
Conventional multiple-attribute decision analysis models cannot deal with the complex relationships among factors at different hierarchical levels. Nevertheless, decision makers involved in the implementation of a legal AI bot need such a model to support their decisions. The objective of the present study is to solve this problem. We use an integrated MCDM model that combines the decision-making trial and evaluation laboratory (DEMATEL), DANP (DEMATEL-based ANP), and a modified VIKOR (M-VIKOR). Its purpose is to explore a legal advisory AI bot and to establish and enhance implementation strategies. This hybrid MCDM method can deal with the limitations of current assessment models and can assist in investigating how best to apply legal AI bots to enhance service performance for sustainable development. In this study, we investigate the interdependence between user behavior and legal AI bots and consider alternative behaviors for achieving the values associated with enhanced performance.
This study makes three contributions. First, it identifies four significant aspects that legal advisory institutions must take into account before implementing a legal AI bot: attitudes toward legal AI bots, trust-related behavior, perceived behavioral control, and resistance to innovation for sustainable development. Second, this research examines trust-related behavior; that is, it determines perceptions of external and internal limitations on user behavior and uses their relative importance to guide legal AI bot implementation. For users to be assured that a legal AI bot can be applied for legal advisory work, legal advisory institutions must train them in the tools required for the fundamental applications and functions of a legal AI bot. Third and finally, the results of this research indicate to what extent usefulness and ease of use affect users’ attitudes toward ongoing applications of legal AI bots. If users are to accept a legal AI bot, then they need an environment in which they perceive the methods as easy and useful to apply. A better understanding of how to implement a legal AI bot will assist administrators in adopting suitable schemes for creating such an environment for sustainable development.
This study has five sections. Section 1 (given above) introduces the research. Section 2 reviews the existing literature with regard to aspects of service and legal AI Bots and how to structure a model for their implementation so that our conceptual model can be developed. Section 3 defines this integrated MCDM model. Section 4 offers a case study of implementation and investigates and discusses the outcomes. Section 5 concludes.

2. Literature Review

Informatics in the legal field is growing, bringing law and AI together at their most important intersections. We thus conduct an interdisciplinary investigation spanning intelligent technology, law, logic, informatics, and related areas [2]. The fast development of AI will influence the marketplace for legal services, the structural transformation of the legal profession, and the reorganization of resources for legal bots [3,4]. AI will improve the agility of legal services and will raise their standard by promoting broader access to justice and by removing the asymmetry of legal service resources in the future [5,6]. However, legal AI bots are not actual specialists, and human lawyers still need to supervise them whenever necessary [4,7]. How AI changes human thinking and procedures is a significant issue in this area, and it is also a challenge that legal professionals need to confront [8,9].
Legal AI bots could address the imbalance in legal service resources [4,10]. In the 1970s, researchers started to study the combination of law and AI by investigating whether machine judgements could replace human judgements by removing legal vagueness [4,11,12]. However, this early legal AI was intended to serve and assist lawyers or judges in handling cases, not to replace them [4,10,13].
Academic studies on the combination of AI and law are still at a developing stage and fall short in conceptualizing and realizing the application of AI in different areas, much less in investigating the relevance of legal AI from the perspective of customers or users. Thus, based on the technology acceptance model (TAM), trust, and innovation-resistance issues, our research investigates the crucial factors in society’s acceptance of AI robots entering the legal field and interviews clients, lawyers, experts, and judges. The outcomes of the research will contribute to the combination of AI and law and to practical applications, thus filling these research gaps.
TAM considers users’ actual behavior in order to understand their intention to apply a novel technology. Two main factors, perceived ease of use and perceived usefulness, influence intention and adoption [14,15,16]. Numerous researchers have introduced user trust as a main factor in TAM-based investigations [17,18]. They found that user trust exerts an important influence on perceived usefulness [19]. On this foundation of TAM, we examine the influences of attitude-related behaviors, perceived behavioral control, and user trust on the readiness of users to accept legal AI bots. In addition, effective regulations and laws create trust, and a significant issue for a legal AI bot is developing and growing trust: trust in privacy, trust in functions, and trust in design. Though existing legal structures are robust enough to deal with some of the challenges that autonomous and robotic goods and services pose, they must still develop or adapt in response to new kinds of applications, personal choices, and government actions [20].
According to previous research, a legal bot may offer an attractive AI technology, as users will find it an interesting and innovative approach. Various factors might affect users’ behavior and their willingness to use legal AI bots [4]. Xu and Wang [4] studied how individuals’ willingness to be innovative and their perception of a method’s usefulness affect the adoption of a legal AI bot. Other research has applied TAM to investigate how users accept novel ideas and implement a legal AI bot [4,21]. Investigations have mostly focused on the acceptance of a legal AI bot, using users’ application or intention as the dependent variable. Most previous research has focused on users’ comments on how they use a legal AI bot, in terms of their acceptance of the technology (how useful the legal AI bot is and how easy it is to use) and their attitudes toward or interest in legal AI bots [4,21].
While the provision of tools or approaches to support users’ application efforts remains a challenging and significant topic [4,21], there has been little research into how and why users accept legal AI bots [4]. Therefore, this study focuses on user behavior and how to solve various related issues. To do this, we analyze users who use a legal AI bot at legal advisory institutions and their various behaviors: plan-related (attitude-related behavior and perceived behavioral control) and trust-related, together with resistance to innovation. We do this to interpret and predict their attitudes toward legal AI bots and their intention to implement them at legal advisory institutions. The MCDM model is used to evaluate users’ behavior. To provide a framework for this behavior, we developed the following evaluation system, which comprises fourteen factors grouped into four aspects: attitude-related behaviors (ARB), perceived behavioral control (PBC), trust-related behaviors (TRB), and innovation resistance (IR). The factors corresponding to legal AI bot implementation within each aspect are shown in Table 1.

3. Developing a Map Based on an Integrated MCDM Model

In this section, we briefly define the proposed integrated MCDM model. One of the critical issues in MCDM is ranking a series of alternatives according to a series of factors, and numerous MCDM approaches rank alternatives in different ways [34]. This model is based on previous practice and is considered a suitable method for exploring a strategy to ensure the implementation of a legal AI bot. The functions that the integrated model offers include selection and ranking as well as performance enhancement. In this study, there are two alternatives: the intention stage, in which users want and plan to use a legal AI bot, and the adoption stage, in which users are willing to use a legal AI bot. The latter requires reducing any gaps in achieving the ideal outcomes. Ultimately, the major advantage of our hybrid model is that its decision-making function extends from selection to enhancement. Thus, it can help administrators develop the best strategies for alternative selection and enhancement problems.
The hybrid model is divided into three parts, applied after the factors/aspects to be included in the framework for legal AI bot implementation have been confirmed. (1) DEMATEL is applied to set up a structure showing the network of influential relationships (i.e., the INRM, the influential network relationships map) among the factors/aspects within the framework. (2) DANP (DEMATEL-based ANP) applies the concepts and procedures of ANP to derive the influential weights of each factor/aspect. (3) The modified VIKOR technique, using these influential weights, is then used to synthesize the gaps between current and ideal performances. Hence, this hybrid model, integrating the three parts, can support decision makers in determining how to decrease the performance gaps to attain the ideal outcome. The hybrid model is shown in Figure 1.

3.1. DEMATEL for Constructing an Evaluation Framework with INRM

DEMATEL is a method for establishing interdependent relationships among factors in a complex structure. The method applies matrix operations to compute the degree of direct and indirect effects of each factor/aspect on the others [23,35,36,37]. The method has four phases, as follows.

3.1.1. Phase 1: Building Domain Knowledge Based on a Direct-Relation Matrix

When the number of elements $a$ in an evaluation framework has been confirmed, a standard scale for the degree of influence is developed (e.g., ranging from “extremely high effect (4)” to “no effect (0)”). This scale measures the degree of influence between factors or aspects in natural language. Each of the $n$ domain experts uses the scale to score the direct influence of factor/aspect $x$ on factor/aspect $y$, giving the matrix $D = [d_{xy}]_{a \times a} = \left[\left(\sum_{z=1}^{n} d_{xy}^{z}\right)/n\right]_{a \times a}$ (in which $d_{xy} \geq 0$ for $x \neq y$ and $d_{xx} = 0$, and $n$ is the number of domain experts). Averaging in this way yields the initial direct-relation matrix $E = [e_{xy}]_{a \times a}$, which represents the aggregated experience of all the domain experts.
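As an illustration of this phase, the expert matrices can be averaged as in the following minimal Python sketch; the scores and the function name are hypothetical rather than data from this study.

```python
import numpy as np

# Illustrative sketch of DEMATEL Phase 1: each of n experts scores the direct
# influence of element x on element y on the 0-4 scale; averaging the expert
# matrices gives the initial direct-relation matrix E, whose diagonal is zero.
def initial_direct_relation_matrix(expert_scores):
    """expert_scores: array-like of shape (n_experts, a, a) with values in 0..4."""
    E = np.asarray(expert_scores, dtype=float).mean(axis=0)
    np.fill_diagonal(E, 0.0)  # an element is not considered to influence itself
    return E

# Three experts, a = 2 elements (made-up scores purely for illustration).
E = initial_direct_relation_matrix([
    [[0, 3], [2, 0]],
    [[0, 4], [1, 0]],
    [[0, 2], [3, 0]],
])
print(E)  # [[0. 3.] [2. 0.]]
```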

3.1.2. Phase 2: Obtaining a Normalized Direct Relation Matrix

A normalized direct-relation matrix $B$ is obtained by normalizing the initial direct-relation matrix $E$ using Equations (1) and (2), so that the largest row or column sum of $B$ equals 1 and all diagonal terms of $B$ are 0:

$$\eta = \max\left[ \max_{x} \sum_{y=1}^{a} |e_{xy}|,\ \max_{y} \sum_{x=1}^{a} |e_{xy}| \right] \quad (1)$$

$$B = E / \eta \quad (2)$$
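A minimal sketch of Equations (1) and (2), assuming E is already the averaged direct-relation matrix with a zero diagonal:

```python
import numpy as np

# Sketch of DEMATEL Phase 2 (Equations (1)-(2)): eta is the largest row or
# column sum of E, and B = E / eta, so every row and column sum of B is at
# most 1 while the diagonal stays 0.
def normalize_direct_relation(E):
    eta = max(np.abs(E).sum(axis=1).max(),   # largest row sum
              np.abs(E).sum(axis=0).max())   # largest column sum
    return E / eta

B = normalize_direct_relation(np.array([[0.0, 3.0], [2.0, 0.0]]))
print(B)  # approximately [[0, 1], [0.667, 0]]
```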

3.1.3. Phase 3: Deriving a Matrix of Full-Influential Relations

The full-influence relation matrix $F$ can be derived using Equation (3). Convergence of the matrix inversion is guaranteed in the same way as for an absorbing Markov chain. Thus, the full-influence relation matrix $F$ is obtained from the values in the normalized direct-relation matrix $B$, where $I$ is the identity matrix:

$$F = B + B^{2} + \cdots + B^{h} = B(I - B)^{-1}, \quad \text{when } \lim_{h \to \infty} B^{h} = [0]_{a \times a} \quad (3)$$
The full-influence relation matrix F can be divided into F C (by factors) and F D (by aspects) according to a hierarchical structure in Equations (4) and (5), respectively.
$$F_{C} = \begin{bmatrix} F_{C}^{11} & \cdots & F_{C}^{1y} & \cdots & F_{C}^{1m} \\ \vdots & & \vdots & & \vdots \\ F_{C}^{x1} & \cdots & F_{C}^{xy} & \cdots & F_{C}^{xm} \\ \vdots & & \vdots & & \vdots \\ F_{C}^{m1} & \cdots & F_{C}^{my} & \cdots & F_{C}^{mm} \end{bmatrix}_{a \times a} \quad (4)$$

where each block $F_{C}^{xy}$ contains the factor-level influences of the factors in aspect $x$ on the factors in aspect $y$.
$$F_{D} = \begin{bmatrix} f_{11} & \cdots & f_{1y} & \cdots & f_{1m} \\ \vdots & & \vdots & & \vdots \\ f_{x1} & \cdots & f_{xy} & \cdots & f_{xm} \\ \vdots & & \vdots & & \vdots \\ f_{m1} & \cdots & f_{my} & \cdots & f_{mm} \end{bmatrix}_{m \times m} \quad (5)$$
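The derivation of F and its aspect-level aggregate can be sketched as follows; the aspect grouping passed in is a hypothetical input, and averaging each aspect block of F_C to obtain F_D is our reading of how the aspect-level matrix is formed.

```python
import numpy as np

# Sketch of DEMATEL Phase 3 (Equation (3)) plus the split used in Equations
# (4)-(5): F = B (I - B)^{-1} at the factor level (F_C), and an aspect-level
# matrix F_D obtained here by averaging each aspect block of F_C.
def total_influence(B):
    I = np.eye(B.shape[0])
    return B @ np.linalg.inv(I - B)

def aggregate_by_aspects(F_C, groups):
    """groups: one list of factor indices per aspect, e.g. [[0, 1, 2], [3, 4, 5], ...]."""
    m = len(groups)
    F_D = np.zeros((m, m))
    for x, rows in enumerate(groups):
        for y, cols in enumerate(groups):
            F_D[x, y] = F_C[np.ix_(rows, cols)].mean()
    return F_D
```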

3.1.4. Phase 4: Establishing an Influential Network Relations Map

By summing the rows and columns of the full-influence relation matrix $F$, we obtain the row-sum vector $p$ and the column-sum vector $q$, as shown in Equations (6) and (7):

$$p = [p_{x}]_{a \times 1} = \left[\sum_{y=1}^{a} f_{xy}\right]_{a \times 1}, \quad x \in \{1, 2, \ldots, a\} \quad \text{(total influence given by factor } x\text{)} \quad (6)$$

$$q = [q_{y}]'_{1 \times a} = \left[\sum_{x=1}^{a} f_{xy}\right]'_{1 \times a}, \quad y \in \{1, 2, \ldots, a\} \quad \text{(total influence received by factor } y\text{)} \quad (7)$$

When $y = x$, the sum $(p_{x} + q_{x})$ of the row and column aggregations shows the total influence given and received by factor $x$; that is, it represents the degree of effect that factor $x$ has in the entire structure, also called “prominence”. In addition, the value $(p_{x} - q_{x})$ gives the net influence of factor $x$ on the entire system. When $(p_{x} - q_{x})$ is positive, $x$ belongs to the net cause group; when $(p_{x} - q_{x})$ is negative, $x$ belongs to the net effect group. Thus, by mapping the dataset $(p_{x} + q_{x},\ p_{x} - q_{x})$, we obtain the INRM of the aspects and factors.
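A short sketch of Equations (6) and (7) and of the quantities plotted in the INRM:

```python
import numpy as np

# Sketch of DEMATEL Phase 4 (Equations (6)-(7)): row sums p (influence given)
# and column sums q (influence received) of the total-influence matrix yield
# "prominence" (p + q) and "net relation" (p - q) for drawing the INRM.
def prominence_and_relation(F):
    p = F.sum(axis=1)          # influence each element gives
    q = F.sum(axis=0)          # influence each element receives
    return p + q, p - q        # (p - q) > 0: cause group; < 0: effect group
```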

3.2. The DANP Method for Deriving Influential Weights on Aspects and Factors

DANP applies the concepts and procedures of ANP to the full-influence relation matrix to derive the weights of the interdependent relations among the aspects/factors [23,35,36]. The resulting weight of each factor/aspect represents its relative degree of influence on the whole model, based simultaneously on the influence it gives and receives. The DANP method includes three major phases, as follows.

3.2.1. Phase 1: Developing an Unweighted Super-Matrix

Developing the unweighted super-matrix $W = (F_{C}^{\rho})'$ involves two steps. The first step normalizes the full-influence relation matrix $F_{C}$ (i.e., at the factor level) to obtain the normalized full-influence relation matrix $F_{C}^{\rho}$. The second step transposes $F_{C}^{\rho}$ to obtain $W = (F_{C}^{\rho})'$.
The normalized full-influence relation matrix $F_{C}^{\rho}$ (Equation (8)) is obtained by normalizing each aspect-wise row block of the full-influence relation matrix $F_{C}$ (Equation (4)), so that each block row sums to one and each full row sums to the number of aspects:

$$F_{C}^{\rho} = \begin{bmatrix} F_{C}^{\rho 11} & \cdots & F_{C}^{\rho 1y} & \cdots & F_{C}^{\rho 1m} \\ \vdots & & \vdots & & \vdots \\ F_{C}^{\rho x1} & \cdots & F_{C}^{\rho xy} & \cdots & F_{C}^{\rho xm} \\ \vdots & & \vdots & & \vdots \\ F_{C}^{\rho m1} & \cdots & F_{C}^{\rho my} & \cdots & F_{C}^{\rho mm} \end{bmatrix}_{a \times a} \quad (8)$$

where $F_{C}^{\rho 11}$, as an example, demonstrates how a block is normalized, as shown in Equations (9) and (10):
$$d_{x}^{11} = \sum_{y=1}^{m_{1}} f_{xy}^{11}, \quad x = 1, 2, \ldots, m_{1} \quad (9)$$

$$F_{C}^{\rho 11} = \begin{bmatrix} f_{11}^{11}/d_{1}^{11} & \cdots & f_{1y}^{11}/d_{1}^{11} & \cdots & f_{1m_{1}}^{11}/d_{1}^{11} \\ \vdots & & \vdots & & \vdots \\ f_{x1}^{11}/d_{x}^{11} & \cdots & f_{xy}^{11}/d_{x}^{11} & \cdots & f_{xm_{1}}^{11}/d_{x}^{11} \\ \vdots & & \vdots & & \vdots \\ f_{m_{1}1}^{11}/d_{m_{1}}^{11} & \cdots & f_{m_{1}y}^{11}/d_{m_{1}}^{11} & \cdots & f_{m_{1}m_{1}}^{11}/d_{m_{1}}^{11} \end{bmatrix} = \begin{bmatrix} f_{11}^{\rho 11} & \cdots & f_{1y}^{\rho 11} & \cdots & f_{1m_{1}}^{\rho 11} \\ \vdots & & \vdots & & \vdots \\ f_{x1}^{\rho 11} & \cdots & f_{xy}^{\rho 11} & \cdots & f_{xm_{1}}^{\rho 11} \\ \vdots & & \vdots & & \vdots \\ f_{m_{1}1}^{\rho 11} & \cdots & f_{m_{1}y}^{\rho 11} & \cdots & f_{m_{1}m_{1}}^{\rho 11} \end{bmatrix} \quad (10)$$
Then, the normalized full-influence relation matrix $F_{C}^{\rho}$ is transposed to obtain the unweighted super-matrix $W = (F_{C}^{\rho})'$, as expressed in Equation (11):

$$W = (F_{C}^{\rho})' = \begin{bmatrix} W^{11} & \cdots & W^{1y} & \cdots & W^{1m} \\ \vdots & & \vdots & & \vdots \\ W^{x1} & \cdots & W^{xy} & \cdots & W^{xm} \\ \vdots & & \vdots & & \vdots \\ W^{m1} & \cdots & W^{my} & \cdots & W^{mm} \end{bmatrix}_{a \times a} \quad (11)$$
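A sketch of this phase, assuming the factor-to-aspect grouping is supplied by the analyst; each aspect block is row-normalized as in Equations (9) and (10) and the result is transposed as in Equation (11).

```python
import numpy as np

# Sketch of DANP Phase 1 (Equations (8)-(11)): each aspect block of F_C is
# normalized so that its rows sum to 1, and the result is transposed to give
# the unweighted super-matrix W. The aspect grouping is a hypothetical input.
def unweighted_supermatrix(F_C, groups):
    F_rho = F_C.astype(float)
    for rows in groups:                      # aspect of the influencing factors
        for cols in groups:                  # aspect of the influenced factors
            block = F_rho[np.ix_(rows, cols)]
            block = block / block.sum(axis=1, keepdims=True)  # Equations (9)-(10)
            F_rho[np.ix_(rows, cols)] = block
    return F_rho.T                           # W = (F_C^rho)', Equation (11)
```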

3.2.2. Phase 2: Synthesizing a Weighted Super-Matrix

The weighted super-matrix $W^{*}$ is synthesized in two steps. The first step normalizes the full-influence relation matrix $F_{D}$ (i.e., at the aspect level, Equation (5)) and transposes the result to obtain the normalized full-influence relation matrix $F_{D}^{\rho}$, as shown in Equations (12) and (13). The second step multiplies the normalized full-influence relation matrix $F_{D}^{\rho}$ by the unweighted super-matrix $W$ to produce the weighted super-matrix $W^{*}$, as expressed in Equation (14).
$$d_{x} = \sum_{y=1}^{m} f_{D}^{xy}, \quad x = 1, 2, \ldots, m \qquad \text{and} \qquad f_{D}^{\rho\, xy} = f_{D}^{xy}/d_{x}, \quad y = 1, 2, \ldots, m \quad (12)$$

$$F_{D}^{\rho} = \begin{bmatrix} f_{D}^{11}/d_{1} & \cdots & f_{D}^{1y}/d_{1} & \cdots & f_{D}^{1m}/d_{1} \\ \vdots & & \vdots & & \vdots \\ f_{D}^{x1}/d_{x} & \cdots & f_{D}^{xy}/d_{x} & \cdots & f_{D}^{xm}/d_{x} \\ \vdots & & \vdots & & \vdots \\ f_{D}^{m1}/d_{m} & \cdots & f_{D}^{my}/d_{m} & \cdots & f_{D}^{mm}/d_{m} \end{bmatrix}_{m \times m} = \begin{bmatrix} f_{D}^{\rho 11} & \cdots & f_{D}^{\rho 1y} & \cdots & f_{D}^{\rho 1m} \\ \vdots & & \vdots & & \vdots \\ f_{D}^{\rho x1} & \cdots & f_{D}^{\rho xy} & \cdots & f_{D}^{\rho xm} \\ \vdots & & \vdots & & \vdots \\ f_{D}^{\rho m1} & \cdots & f_{D}^{\rho my} & \cdots & f_{D}^{\rho mm} \end{bmatrix}_{m \times m} \quad (13)$$

$$W^{*} = F_{D}^{\rho} \times W = \begin{bmatrix} f_{D}^{\rho 11} \times W^{11} & \cdots & f_{D}^{\rho 1y} \times W^{1y} & \cdots & f_{D}^{\rho 1m} \times W^{1m} \\ \vdots & & \vdots & & \vdots \\ f_{D}^{\rho x1} \times W^{x1} & \cdots & f_{D}^{\rho xy} \times W^{xy} & \cdots & f_{D}^{\rho xm} \times W^{xm} \\ \vdots & & \vdots & & \vdots \\ f_{D}^{\rho m1} \times W^{m1} & \cdots & f_{D}^{\rho my} \times W^{my} & \cdots & f_{D}^{\rho mm} \times W^{mm} \end{bmatrix} \quad (14)$$
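A sketch of this phase; scaling block (x, y) of W by the (y, x) entry of the row-normalized aspect matrix (i.e., by its transpose, as described above) keeps the weighted super-matrix column-stochastic, and is one reading of Equation (14).

```python
import numpy as np

# Sketch of DANP Phase 2 (Equations (12)-(14)): the aspect-level matrix F_D is
# row-normalized and, as described in the text, used in transposed form to
# scale each block of the unweighted super-matrix W, giving the weighted
# super-matrix W*. The aspect grouping is a hypothetical input.
def weighted_supermatrix(W, F_D, groups):
    F_D_rho = F_D / F_D.sum(axis=1, keepdims=True)      # Equations (12)-(13)
    W_star = W.astype(float)
    for x, rows in enumerate(groups):
        for y, cols in enumerate(groups):
            W_star[np.ix_(rows, cols)] *= F_D_rho[y, x]  # transposed aspect weight
    return W_star
```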

3.2.3. Phase 3: Agglomerating the Weighted Super-Matrix

Following the Markov chain concept in ANP, we multiply the weighted super-matrix $W^{*}$ by itself repeatedly until it converges to a stable super-matrix, i.e., until a sufficiently large power $\Theta$ is reached. The influential weights are then obtained from $\lim_{\Theta \to \infty} (W^{*})^{\Theta}$. Finally, we obtain a set of influential weights for the factors $(w_{1}, \ldots, w_{x}, \ldots, w_{a})$ and for the aspects $(w_{1}^{D}, \ldots, w_{x}^{D}, \ldots, w_{m}^{D})$.
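A sketch of the limiting process; in practice the power iteration is stopped once successive powers of the weighted super-matrix differ by less than a small tolerance.

```python
import numpy as np

# Sketch of DANP Phase 3: raise the weighted super-matrix to successive powers
# until it stabilizes; at the limit every column is (nearly) identical and any
# column can be read off as the vector of influential weights.
def influential_weights(W_star, tol=1e-9, max_iter=10_000):
    M = W_star.astype(float)
    for _ in range(max_iter):
        M_next = M @ W_star
        if np.max(np.abs(M_next - M)) < tol:
            return M_next[:, 0]
        M = M_next
    raise RuntimeError("weighted super-matrix did not converge")
```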

3.3. M-VIKOR for Evaluating and Improving Alternative Performance

M-VIKOR is an evaluation technique based on the concept of compromise in reaching the best possible outcomes in multicriteria situations. It can be applied to assist decision makers in selecting and ranking options as well as in enhancing performance [23,35,36]. We define the benchmark in terms of an “ideal value” and a “worst value” instead of the usual “max-min” rule. In conventional VIKOR, the positive-ideal and negative-ideal points are determined by the best and worst performance scores (“max-min”) observed among the alternatives in real-world data. Because this form of VIKOR cannot show the gaps relevant to enhancing the alternatives, we modify it so that the reference points are a fixed ideal value and a fixed worst value, used for both alternative selection and enhancement. Thus, under the M-VIKOR “ideal-worst” normalized class distance, being near the ideal value and far from the worst value is a good outcome in real-world situations [23,35,36,38,39,40,41,42].

3.3.1. Phase 1: Determining the Negative/Positive Ideal Point Based on Ideal Values and Worst Values

The normal VIKOR method sets the positive-ideal point $u_{x}^{*}$ and the negative-ideal point $u_{x}^{-}$ from the $m$ alternatives as follows (Equations (15) and (16)):

$$u_{x}^{*} = \begin{cases} \max_{k} \{u_{kx} \mid k = 1, 2, \ldots, m\}, & \text{for benefit attributes} \\ \min_{k} \{u_{kx} \mid k = 1, 2, \ldots, m\}, & \text{for cost attributes} \end{cases}, \quad x = 1, 2, \ldots, a \quad (15)$$

$$u_{x}^{-} = \begin{cases} \min_{k} \{u_{kx} \mid k = 1, 2, \ldots, m\}, & \text{for benefit attributes} \\ \max_{k} \{u_{kx} \mid k = 1, 2, \ldots, m\}, & \text{for cost attributes} \end{cases}, \quad x = 1, 2, \ldots, a \quad (16)$$

In this study, we used questionnaires in which the scored responses range from 0 to 10: totally dissatisfied (0) to extremely satisfied (10). For each factor $x$, we set the ideal value at 10 (i.e., $u_{x}^{aspired} = 10$ as the positive-ideal point) and the worst value at 0 (i.e., $u_{x}^{worst} = 0$ as the negative-ideal point). This basic concept differs from the traditional method as follows:
The vector of ideal value (Equation (17)):
$$u^{aspired} = (u_{1}^{aspired}, \ldots, u_{x}^{aspired}, \ldots, u_{a}^{aspired}) = (10, \ldots, 10, \ldots, 10) \quad (17)$$
The vector of worst value (Equation (18)):
$$u^{worst} = (u_{1}^{worst}, \ldots, u_{x}^{worst}, \ldots, u_{a}^{worst}) = (0, \ldots, 0, \ldots, 0) \quad (18)$$

3.3.2. Phase 2: Obtaining the Mean of the Minimal Gap of the Maximal Regret and Group Utility on Each Alternative

The purpose of this phase is to compute, for each alternative $k$, the average gap $S_{k}$ (group utility) and the maximal gap $T_{k}$ over all factors or aspects, which identifies the item with the highest priority for enhancement (Equations (19) and (20)):

$$L_{k}^{g=1} = S_{k} = \sum_{x=1}^{a} w_{x} r_{kx} = \sum_{x=1}^{a} w_{x} \left(\left|u_{x}^{aspired} - u_{kx}\right|\right) / \left(\left|u_{x}^{aspired} - u_{x}^{worst}\right|\right) \quad (19)$$

$$L_{k}^{g=\infty} = T_{k} = \max_{x} \{r_{kx} \mid x = 1, 2, \ldots, a\} \quad (20)$$

where $r_{kx} = \left(\left|u_{x}^{aspired} - u_{kx}\right|\right) / \left(\left|u_{x}^{aspired} - u_{x}^{worst}\right|\right)$ represents the performance gap ratio; $S_{k}$ indicates the average gap ratio between the ideal value $u_{x}^{aspired}$ and the performance value $u_{kx}$ over the factors $x$ of alternative $k$; $w_{x}$ indicates the relative influential weight of factor $x$ (or aspect $x$), obtained via the DANP method; and $T_{k}$ represents the maximal performance gap across all factors or aspects, used to prioritize enhancement within alternative $k$. The M-VIKOR method can also be applied to a single alternative purely in terms of performance enhancement: closing the gap between the current performance and the ideal value.
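A minimal sketch of Equations (19) and (20), using the fixed ideal value 10 and worst value 0 adopted in this study; the function name and arguments are illustrative.

```python
import numpy as np

# Sketch of M-VIKOR Phase 2 (Equations (19)-(20)): r_kx are the gap ratios,
# S_k their weighted average (group utility gap), and T_k the maximal regret.
def mvikor_gaps(scores, weights, aspired=10.0, worst=0.0):
    """scores: one alternative's performance on each factor (0-10 scale);
    weights: DANP influential weights, summing to 1."""
    r = np.abs(aspired - np.asarray(scores, dtype=float)) / abs(aspired - worst)
    S_k = float(np.dot(weights, r))
    T_k = float(r.max())
    return r, S_k, T_k
```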

3.3.3. Phase 3: Providing a Comprehensive Indicator of Each Alternative

The comprehensive score $H_{k}$ of each alternative is finally integrated by Equation (21). When this value is combined with the influential network relations map (INRM), we can see how each alternative can be enhanced to decrease the gaps in its factors and approach the ideal value:

$$H_{k} = v\,\frac{S_{k} - S^{aspired}}{S^{worst} - S^{aspired}} + (1 - v)\,\frac{T_{k} - T^{aspired}}{T^{worst} - T^{aspired}} \quad (21)$$

where $S^{aspired} = 0$ (i.e., the ideal value of the group utility $S_{k}$), $S^{worst} = 1$ (i.e., the worst situation of $S_{k}$), $T^{aspired} = 0$ (i.e., the ideal value of the maximal regret $T_{k}$), and $T^{worst} = 1$ (i.e., the worst situation of $T_{k}$). Thus, Equation (21) can be rewritten as Equation (22):

$$H_{k} = v S_{k} + (1 - v) T_{k} \quad (22)$$

where $v$ is the weight reflecting the decision-making perspective ($v = 1$ considers only minimizing the group utility gap $S_{k}$; $v = 0$ considers only the maximum gap $T_{k}$ to be enhanced first; and $v = 0.5$ considers both the group utility gap $S_{k}$ and the maximum gap $T_{k}$ equally).
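With S_aspired = T_aspired = 0 and S_worst = T_worst = 1, Equation (22) reduces to a simple weighted sum, as in the following sketch:

```python
# Sketch of M-VIKOR Phase 3 (Equation (22)), with S_aspired = T_aspired = 0 and
# S_worst = T_worst = 1 so that Equation (21) reduces to a weighted sum.
def comprehensive_index(S_k, T_k, v=0.5):
    # v = 1 stresses the average gap S_k; v = 0 stresses the maximal regret T_k.
    return v * S_k + (1 - v) * T_k
```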

4. Research Methods

In this section, our proposed hybrid MCDM model is applied in a case study on the implementation of a legal AI bot in Taiwan. The case study illustrates how the hybrid MCDM model can assist administrators in understanding and enhancing their own attitudes toward this type of legal AI bot and in understanding users’ behavior and attitudes toward it.

4.1. Data Collection

Between April and May 2020, interviews and questionnaires were used to collect data from 36 experts (10 AI judges, 12 lawyers, and 14 AI experts) who understand and have an interest in the development of AI, law, or AI robots and who had at least 10 years of related work experience. To ensure the smooth progress of data collection, this study first applied a matrix-filling technique in a pre-investigation and trial run. The feedback from this trial was that it was not easy for experts to match the names and codes of individual factors when filling in the matrix directly. Hence, this study refined the procedure by designing a Likert-type survey and clarifying the corresponding instructions and concepts in detail so that the experts could fill in the survey carefully and easily. In the analysis, we used Microsoft Office Excel 2016 for the computations. The significant confidence is 99.05%, and the gap error is only 0.95%, which is below the 5% threshold and therefore indicates a consensus above 95%. Each survey took between 40 and 50 minutes to complete.
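The consensus check quoted above can be sketched as follows; this reconstruction follows the note to Table 3, and the exact formula used by the authors may differ in detail.

```python
import numpy as np

# Hedged sketch of the consensus check (see the note to Table 3): the average
# relative change between the mean influence matrices obtained with the last
# expert included and excluded; a value below 5% is taken to indicate
# sufficient (>95%) consensus.
def consensus_gap_percent(mean_with_last, mean_without_last):
    return float(np.mean(np.abs(mean_with_last - mean_without_last) / mean_with_last) * 100.0)
```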

4.2. Using DEMATEL to Develop INRM

This study used DEMATEL to investigate how to adopt a legal AI bot according to the 14 factors grouped into the four aspects discussed above. From the surveys, we obtained the total-influence matrix F for the four aspects and the 14 factors, shown in Table 2 and Table 3, respectively. Table 2 summarizes the users’ assessments for the four aspects and shows how strongly each aspect is associated with the others. Based on the degree of total influence $(p_x + q_x)$, TRB ($A_3$) has the strongest relationship with the other aspects; this was the most significant effect. By contrast, IR ($A_4$) has the least. Based on the net influence $(p_x - q_x)$, we also find that IR ($A_4$) has the strongest direct influence on the other aspects and that TRB ($A_3$) has the weakest.
According to the total-influence matrix, we assess how each influencing factor is related to the individual factors (see Table 3). Table 4 shows the extent of each factor’s direct and indirect effects and contrasts it with the other factors. PU ($f_1$) is the most significant factor for consideration; by contrast, IB ($f_{14}$) has the smallest effect on the other factors. Table 4 also shows that UB ($f_{10}$) has the strongest effect on the other factors and that TFC ($f_6$) is the most strongly affected by the other factors.

4.3. Using the DANP Model for Analyzing the Influential Weights

We applied DEMATEL to determine the most influential relationships among the factors and to acquire accurate weightings. The objective of DANP is to capture the feedback effects arising from the interdependence and interrelationships among factors. Hence, we developed this evaluation model by applying the DEMATEL results within the concepts of ANP, so that DANP could determine the influential weight of each factor (see Table 4 and Table 5).
The factors with the greatest influence weights on user behavior toward the legal AI bot are SA ($f_8$), PU ($f_1$), and TB ($f_9$). In addition, the influential weights are integrated with the DEMATEL results to evaluate the priority of problem-solving according to the gaps recognized by the M-VIKOR technique and the INRM (shown in Figure 2).

4.4. Using M-VIKOR for Assessing the Total Gaps

We used M-VIKOR to enhance legal AI bot services and to estimate the total gaps in users’ behavior at the intention and adoption stages, as shown in Table 5. Administrators can prioritize problem-solving topics using the integrated index for each aspect as well as for the individual factors.
Applying these indices to the four aspects and 14 factors, the gap values can be evaluated to obtain the priority sequence of enhancements for attaining the ideal values. TB ($f_{13}$), with the largest gap (0.750) at the intention stage, is the primary factor to be enhanced, followed by IB ($f_{14}$) and UB ($f_{10}$). Of all the factors, administrators of legal advisory institutions should focus most on TB ($f_{13}$) (tradition barrier) at the intention stage. TB ($f_{13}$), with the largest gap (0.625), is also the primary factor to be enhanced at the adoption stage, followed by IB ($f_{14}$) and TFC ($f_6$). Supervisors should likewise pay the most attention to TB ($f_{13}$) (tradition barrier) at the adoption stage. The findings show the enhancement priority sequence required for the overall factors to achieve the ideal value, from the most to the least significant factor.
Priorities for enhancement can also be derived for individual aspects. In ARB ($A_1$), for example, the sequence of priority gaps is complexity ($f_3$), PEOU ($f_2$), and PU ($f_1$). In PBC ($A_2$) at the intention stage, the sequence of priority gaps is TFC ($f_6$), SE ($f_4$), and RFC ($f_5$). In TRB ($A_3$) at the intention stage, the sequence of enhancement priorities is SA ($f_8$), DOT ($f_7$), and TB ($f_9$). In IR ($A_4$) at the intention stage, the sequence of enhancement priorities is TB ($f_{13}$), IB ($f_{14}$), UB ($f_{10}$), VB ($f_{11}$), and RB ($f_{12}$). At the adoption stage, the sequence of enhancement priorities is ($f_2$), ($f_3$), and ($f_1$) in ARB ($A_1$); ($f_6$), ($f_4$), and ($f_5$) in PBC ($A_2$); ($f_9$), ($f_7$), and ($f_8$) in TRB ($A_3$); and ($f_{13}$), ($f_{14}$), ($f_{10}$), ($f_{11}$), and ($f_{12}$) in IR ($A_4$). Based on the gap values provided by the sample of users, these enhancement priority schemes are comprehensive and specific, both for the separate aspects and overall (see Table 5). Administrators will thus be able to understand users’ behavior in adopting legal AI bots and to recognize the gaps at the multiple stages of intention and adoption.

4.5. Results and Discussion

From the DEMATEL method, we identified the interrelationships between factors and aspects using the INRM (shown in Figure 2). As shown in Figure 2, IR ($A_4$) affects the other aspects, PBC ($A_2$), ARB ($A_1$), and TRB ($A_3$). It can be seen that IR ($A_4$) plays a significant role and has the strongest effect on the other aspects. Hence, administrators need to focus on enhancing this aspect first, followed by PBC ($A_2$), ARB ($A_1$), and TRB ($A_3$) in sequence, when evaluating the behavior of users and improving their implementation of legal AI bots.
After investigating the aspects, we next identified the factors within each aspect. Based on these outcomes, the INRM of the factors is also shown in Figure 2. Considering the influence relationships among the factors, in the ARB aspect, PEOU ($f_2$) was the most influential factor and should be enhanced first, followed by PU ($f_1$) and complexity ($f_3$) (see Figure 2: causal relationships in $A_1$). In the PBC aspect, RFC ($f_5$) was the most influential factor and is the most important to enhance, followed by SE ($f_4$) and TFC ($f_6$) (see Figure 2: causal relationships in $A_2$). In the TRB aspect, TB ($f_9$) was the most influential factor and is the most important to enhance, followed by DOT ($f_7$) and SA ($f_8$) (see Figure 2: causal relationships in $A_3$). In the IR aspect, RB ($f_{12}$) was the most influential factor and is the most important to enhance, followed by VB ($f_{11}$), UB ($f_{10}$), TB ($f_{13}$), and IB ($f_{14}$) (see Figure 2: causal relationships in $A_4$).
The findings regarding the aspects and factors offer crucial information for understanding user behavior and what affects users’ use of legal AI bots for sustainable development at legal advisory institutions. Administrators need to consider all the aspects and factors presented in Figure 2. Although the estimation technique can be applied at legal advisory institutions and in nonacademic real-life situations, administrators will need to bear in mind that some modifications of the model may be required at individual institutions. Because the significance of the 14 factors can change according to the particular situation and user behavior, supervisors will need to consider the typical behavior of their users before determining the ideal implementation approach.
The most significant aspect identified via DANP as affecting users’ decisions when evaluating a legal AI bot was TRB ($A_3$), weighted at 0.277; among the factors, SA ($f_8$) and PU ($f_1$) were the most significant, weighted at 0.096 and 0.094, respectively (see Table 5). Trust is a significant element that underpins any connection. Trust is formed when a user believes in the integrity and reliability of the other party, and it is a main element in a reciprocal relationship [43]. Structural assurance represents a belief in the guarantees/promises, encryption, regulations, and other processes of a new legal AI bot, as well as in the expectations users form in uncertain surroundings and their effects on significant events. This study found that the interviewees showed a particular psychological response to legal AI bots and formed a one-way emotional bond: trust [44]. Hence, users’ trust in legal AI bots forms the structural assurance of robots in legal services. In addition, legal AI bots can help improve personnel performance, enhance operational efficiency, and reduce costs. The experts contended that a legal AI bot is helpful and efficient. They agreed that resolving users’ legal issues quickly, conveniently, and at a lower cost constitutes the fundamental facet of perceived usefulness, which is an important influencing factor [4,16]. “Structural assurance” and “perceived usefulness” are therefore the most significant factors when evaluating legal AI bot implementation.
The overall gap values in Table 5 that show room for enhancement are 0.450 at the intention stage and 0.301 at the adoption stage. Across the stages, IR ($A_4$) featured the largest gap both at the intention stage (0.608) and at the adoption stage (0.419); clearly, it needs to be a priority for enhancement if administrators wish to attain the ideal value. In terms of long-term improvement, administrators should carefully consider their intentions regarding introducing legal AI bots for the reasons given above. Assessing legal AI bots according to users’ behavior by means of a multiple-stage pattern, as offered by this approach, can be introduced to legal advisory institutions. However, managers need to be cautious about using this pattern because the significance of the 14 factors may vary according to the situation. Supervisors need to relate the legal AI bot to users’ behavior and to describe these differences before judging whether this would be the ideal service to offer.

5. Conclusions

Legal AI bots have a significant role to play at legal advisory institutions, but the strategies for their use are complex, and there is not always overall clarity on how they should be implemented for sustainable development. Different situations may require different conditions for their use. Based on previous research and the opinions of experts, we established four aspects with 14 factors that align with legal AI bot implementation according to user behavior. We used an integrated MCDM model (DDANPV), a powerful technique combining DEMATEL, DANP, and M-VIKOR, in a case study. A key motivation for combining these methods is their capacity for resolving conflicts among criteria. When various criteria are to be considered, integrated MCDM is one of the most widespread approaches. M-VIKOR is an MCDM technique based on assessing established criteria and reaching a compromise to generate the best solution; it ranks the criteria to establish the solution that is closest to the ideal for sustainable development.
In our decision-making procedure, we applied local and global weightings to the alternatives to allow the leaders of legal advisory institutions to choose the features that would best assist them in implementing legal AI bots for sustainable development. We have not only chosen the best elements but also established how to narrow the gaps to attain the ideal values for legal AI bots. The methodology used in this research is capable of handling intricate problems related to the sustainable development of legal AI bots. This study is therefore not only of significance for related specialists but also offers an adequate and feasible approach to the sustainable development of legal AI bots, providing management support when targeting enhancement of legal AI bot usage.
The limitations of this paper offer directions for future research. The primary data were obtained from a limited number of users in the field of legal AI bots. Although adjusted, there were some dissimilar estimations when assembling the data of the initial influence matrix due to variances in specialized viewpoints. Further research could expand the channels and scope to obtain more extensive primary data, which would increase the accuracy of the final results. Following further developments in the implementation of legal AI bots, it will be necessary to undertake more studies in the field. Research should investigate the core reasons for these variances in order to fully understand the interrelationships among a wide range of factors.

Author Contributions

Methodology, M.-T.L.; Formal analysis, M.-T.L.; Investigation, J.-H.H.; Writing—original draft, M.-T.L. and J.-H.H.; Writing—review & editing, G.-G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was not funded.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Boynton, S. DoNotPay, ‘World’s First Robot Lawyer,’ Coming to Vancouver to Help Fight Parking Tickets. Global News. 1 November 2017. Available online: https://globalnews.ca/news/3838307/donotpay-robot-lawyer-vancouverparking-tickets (accessed on 22 June 2020).
2. Aguilo-Regla, J. Introduction: Legal Informatics and the Conceptions of the Law. In Law and the Semantic Web; Benjamins, R.P., Casanovas, J., Gangemi, A., Eds.; Springer: Berlin, Germany, 2005; pp. 18–24.
3. Bench-Capon, T.; Araszkiewicz, M.; Ashley, K.; Atkinson, K.; Bex, F.; Borges, F.; Wyner, A.Z. A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law. Artif. Intell. Law 2012, 20, 215–319.
4. Xu, N.; Wang, K.J. Adopting robot lawyer? The extending artificial intelligence robot lawyer technology acceptance model for legal industry by an exploratory study. J. Manag. Organ. 2019, 13, 1–19.
5. Hilt, K. What does the future hold for the law librarian in the advent of artificial intelligence? Can. J. Inf. Lib. Sci. 2017, 41, 211–227.
6. Adamski, D. Lost on the digital platform: Europe’s legal travails with the digital single market. Common Mkt. Law Rev. 2018, 55, 719–751.
7. Goodman, J. Meet the AI Robot Lawyers and Virtual Assistants. 2016. Available online: https://www.lexisnexis-es.co.uk/assets/files/legal-innovation.pdf (accessed on 22 June 2020).
8. Papakonstantinou, V.; De Hert, P. Structuring modern life running on software. Recognizing (some) computer programs as new “digital persons”. Comput. Law Secur. Rev. 2018, 34, 732–738.
9. Alarie, B.; Niblett, A.; Yoon, A.H. How artificial intelligence will affect the practice of law. Univ. Toronto Law J. 2018, 68, 106–124.
10. Castell, S. The future decisions of RoboJudge HHJ Arthur Ian Blockchain: Dread, delight or derision? Comput. Law Secur. Rev. 2018, 34, 739–753.
11. D’Amato, A. Can/should computers replace judges. Georgia Law Rev. 1976, 11, 1277.
12. Von der Lieth Gardner, A. An Artificial Intelligence Approach to Legal Reasoning; MIT Press: Cambridge, MA, USA, 1987.
13. McGinnis, J.O.; Pearce, R.G. The great disruption: How machine intelligence will transform the role of lawyers in the delivery of legal services. Fordham Law Rev. 2014, 82, 3041–3066.
14. Almaiah, M.A. Acceptance and usage of a mobile information system services in University of Jordan. Educ. Inf. Technol. 2018, 23, 1873–1895.
15. Roca, J.C.; Chiu, C.M.; Martinez, F.J. Understanding e-learning continuance intention: An extension of the Technology Acceptance Model. Int. J. Hum. Comput. Stud. 2006, 64, 683–696.
16. Sarkar, S.; Chauhan, S.; Khare, A. A meta-analysis of antecedents and consequences of trust in mobile commerce. Int. J. Inf. Manag. 2020, 50, 286–301.
17. Kim, J.B. An empirical study on consumer first purchase intention in online shopping: Integrating initial trust and TAM. Electron. Commer. Res. 2012, 12, 125–150.
18. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90.
19. Abroud, A.; Choong, Y.V.; Muthaiyah, S.; Fie, D.Y.G. Adopting e-finance: Decomposing the technology acceptance model for investors. Serv. Bus. 2015, 9, 161–182.
20. Holder, C.; Khurana, V.; Harrison, F.; Jacobs, L. Robotics and law: Key legal and regulatory implications of the robotics age (Part I of II). Comput. Law Secur. Rev. 2016, 32, 383–402.
21. Greenleaf, G.; Mowbray, A.; Chung, P. Building sustainable free legal advisory systems: Experiences from the history of AI & law. Comput. Law Secur. Rev. 2018, 34, 314–326.
22. Rogers, E.M. The Diffusion of Innovations, 4th ed.; Free Press: New York, NY, USA, 1995.
23. Lu, M.T.; Tzeng, G.H.; Cheng, H.; Hsu, C.C. Exploring mobile banking services for user behavior in intention adoption: Using new hybrid MADM model. Serv. Bus. 2015, 9, 541–565.
24. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340.
25. Moore, G.C.; Benbasat, I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf. Syst. Res. 1991, 2, 192–222.
26. Bandura, A. Social Foundations of Thought and Action: A Social Cognitive Theory; Prentice-Hall: Englewood Cliffs, NJ, USA, 1986.
27. Taylor, S.; Todd, P. Decomposition and crossover effects in the theory of planned behavior: A study of consumer adoption intentions. Int. J. Res. Mark. 1995, 12, 137–155.
28. McKnight, D.H.; Choudhury, V.; Kacmar, C. Developing and validating trust measures for e-commerce: An integrative typology. Inf. Syst. Res. 2002, 13, 344–359.
29. McKnight, D.H.; Chervany, N.L. What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology. Int. J. Electron. Commun. 2001, 6, 35–59.
30. Ma, L.; Lee, C.S. Understanding the barriers to the use of MOOCs in a developing country: An innovation resistance perspective. J. Educ. Comput. Res. 2019, 57, 571–590.
31. Fain, D.; Roberts, M.L. Technology vs. consumer behavior: The battle for the financial services customer. J. Direct Mark. 1997, 11, 44–54.
32. Kuisma, T.; Laukkanen, T.; Hiltunen, M. Mapping the reasons for resistance to internet banking: A means-end approach. Int. J. Inf. Manag. 2007, 27, 75–85.
33. Laukkanen, T.; Lauronen, J. Consumer value creation in mobile banking services. Int. J. Mobile Commun. 2005, 3, 325–338.
34. Mohammadi, M.; Rezaeia, J. Ensemble ranking: Aggregation of rankings produced by different multi-criteria decision-making methods. Omega 2020, 96, 102254.
35. Lu, M.T.; Hsu, C.C.; Liou, J.J.H.; Lo, H.W. A hybrid MCDM and sustainability-balanced scorecard model to establish sustainable performance evaluation for international airports. J. Air Transp. Manag. 2018, 71, 9–19.
36. Lu, M.T.; Lin, S.W.; Tzeng, G.H. Improving RFID adoption in Taiwan’s healthcare industry based on a DEMATEL technique with a hybrid MCDM model. Decis. Support Syst. 2013, 56, 259–269.
37. Feng, G.C.; Ma, R. Identification of the factors that influence service innovation in manufacturing enterprises by using the fuzzy DEMATEL method. J. Clean. Prod. 2020, 253, 120002.
38. Opricovic, S.; Tzeng, G.H. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 2004, 156, 445–455.
39. Opricovic, S.; Tzeng, G.H. Extended VIKOR method in comparison with outranking methods. Eur. J. Oper. Res. 2007, 178, 514–529.
40. Acuña-Soto, C.M.; Liern, V.; Pérez-Gladish, B. A VIKOR-based approach for the ranking of mathematical instructional videos. Manag. Decis. 2019, 57, 501–522.
41. Kumar, A.; Aswin, A.; Gupta, H. Evaluating green performance of the airports using hybrid BWM and VIKOR methodology. Tour. Manag. 2020, 76, 103941.
42. Garg, C.P.; Sharma, A. Sustainable outsourcing partner selection and evaluation using an integrated BWM–VIKOR framework. Environ. Dev. Sustain. 2020, 22, 1529–1557.
43. Nora, L. Trust, commitment, and customer knowledge: Clarifying relational commitments and linking them to repurchasing intentions. Manag. Decis. 2019, 57, 3134–3158.
44. Arbib, M.A.; Fellous, J.M. Emotions: From brain to robot. Trends Cogn. Sci. 2004, 8, 554–561.
Figure 1. Modeling procedures of our proposed hybrid multicriteria decision-making (MCDM) model.
Figure 2. The influential network relationships map (INRM) per aspect and factor.
Table 1. Explanation of aspects and factors.

Aspect/Factors | Description | Source
A1 Attitude-related behaviors (ARB) | |
f1 Perceived usefulness (PU) | The degree to which a person believes that using a particular legal AI bot would enhance his or her performance. | [16,22,23]
f2 Perceived ease of use (PEOU) | The degree to which a user believes that applying a legal AI bot is clear and understandable for the average person. | [16,22,23]
f3 Complexity | Users’ perceptions about how difficult the legal AI bot will be to use, operate, or understand. The easier it is to use and understand, the more likely it is to be implemented. Therefore, complexity is expected to be negatively related to attitude. | [22,23,24,25]
A2 Perceived behavioral control (PBC) | |
f4 Self-efficacy (SE) | Specific decisions that individuals make about their ability to do something. With reference to legal advisory AI bots, self-efficacy refers to users’ assessment of their ability to obtain services and legal information through them. | [23,26]
f5 Resource facilitating conditions (RFC) | Resources, such as time or precedents, related to resource compatibility and matters that may constrain usage. | [23,27]
f6 Technology facilitating conditions (TFC) | Technology, such as software and hardware, related to technology compatibility and issues that may constrain practice. | [23,27]
A3 Trust-related behaviors (TRB) | |
f7 Disposition to trust (DOT) | A person’s general tendency to trust others; it could be considered a personality trait. | [28,29]
f8 Structural assurance (SA) | The perception that the necessary legal and technical structures are in place: guarantees/promises, encryption, regulations, and other processes. | [16,23]
f9 Trust belief (TB) | The belief in the trustworthiness of the legal AI bot, consisting of a set of particular beliefs about competence and integrity. | [18,23,28,29]
A4 Innovation resistance (IR) | |
f10 Usage barrier (UB) | Barriers to using the legal AI bot, including users’ perceptions of what is required for legal advice, e.g., clarity. | [30,31,32,33]
f11 Value barrier (VB) | The perception of some users that a legal AI bot has few advantages, such as when connecting to the advisory legal AI bot costs more time than it yields in benefits. | [30,31]
f12 Risk barrier (RB) | A matter of users’ perception rather than a characteristic of the robots. Hence, for a legal AI bot at legal advisory institutions, it is not always a problem of actual risks but has more to do with users’ perception that, for a number of reasons, the service entails risks. | [30,31]
f13 Tradition barrier (TB) | The impact of the innovation on routines. If these routines are significant to a user, resistance will be high. | [30,31]
f14 Image barrier (IB) | The negative “dangerous to use” perception of AI in general and of robots in particular, related to the origin of an innovation, such as the advisory class. Users who already perceive that technology is too difficult to apply may instantly form a negative image of the services associated with the robots. | [30,31]
Table 2. The sum of effects on the aspects and the total-influence matrix $F_D$.

Aspects | A1 | A2 | A3 | A4 | p_x | q_x | p_x + q_x | p_x − q_x
ARB (A1) | 0.458 | 0.431 | 0.492 | 0.373 | 1.755 | 1.796 | 3.551 | −0.042
PBC (A2) | 0.435 | 0.376 | 0.451 | 0.338 | 1.600 | 1.620 | 3.220 | −0.020
TRB (A3) | 0.473 | 0.428 | 0.462 | 0.350 | 1.713 | 1.841 | 3.553 | −0.128
IR (A4) | 0.431 | 0.385 | 0.436 | 0.336 | 1.588 | 1.398 | 2.987 | 0.190
Table 3. The total-influence matrix $F_C$ for the factors.

Factors | f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 | f9 | f10 | f11 | f12 | f13 | f14
f1 | 0.442 | 0.496 | 0.486 | 0.448 | 0.457 | 0.454 | 0.495 | 0.530 | 0.501 | 0.418 | 0.399 | 0.386 | 0.366 | 0.373
f2 | 0.499 | 0.399 | 0.466 | 0.426 | 0.427 | 0.436 | 0.469 | 0.510 | 0.477 | 0.403 | 0.383 | 0.372 | 0.348 | 0.355
f3 | 0.489 | 0.456 | 0.386 | 0.414 | 0.408 | 0.414 | 0.471 | 0.500 | 0.476 | 0.392 | 0.374 | 0.354 | 0.335 | 0.345
f4 | 0.449 | 0.422 | 0.418 | 0.327 | 0.381 | 0.387 | 0.432 | 0.457 | 0.428 | 0.358 | 0.352 | 0.340 | 0.310 | 0.322
f5 | 0.486 | 0.454 | 0.444 | 0.411 | 0.360 | 0.437 | 0.475 | 0.497 | 0.476 | 0.379 | 0.366 | 0.354 | 0.339 | 0.353
f6 | 0.430 | 0.411 | 0.397 | 0.370 | 0.391 | 0.321 | 0.418 | 0.452 | 0.422 | 0.346 | 0.339 | 0.311 | 0.304 | 0.305
f7 | 0.480 | 0.459 | 0.461 | 0.417 | 0.432 | 0.424 | 0.392 | 0.502 | 0.477 | 0.369 | 0.356 | 0.345 | 0.325 | 0.332
f8 | 0.496 | 0.473 | 0.466 | 0.428 | 0.441 | 0.436 | 0.483 | 0.427 | 0.486 | 0.378 | 0.361 | 0.348 | 0.319 | 0.336
f9 | 0.493 | 0.463 | 0.462 | 0.417 | 0.435 | 0.423 | 0.480 | 0.505 | 0.403 | 0.387 | 0.374 | 0.350 | 0.336 | 0.335
f10 | 0.481 | 0.459 | 0.465 | 0.429 | 0.419 | 0.410 | 0.448 | 0.480 | 0.455 | 0.333 | 0.373 | 0.376 | 0.361 | 0.351
f11 | 0.493 | 0.469 | 0.465 | 0.420 | 0.416 | 0.420 | 0.458 | 0.497 | 0.474 | 0.394 | 0.323 | 0.381 | 0.353 | 0.355
f12 | 0.499 | 0.474 | 0.471 | 0.423 | 0.431 | 0.422 | 0.488 | 0.519 | 0.495 | 0.420 | 0.396 | 0.320 | 0.362 | 0.359
f13 | 0.387 | 0.359 | 0.359 | 0.333 | 0.333 | 0.332 | 0.359 | 0.392 | 0.384 | 0.341 | 0.317 | 0.317 | 0.236 | 0.294
f14 | 0.384 | 0.354 | 0.354 | 0.324 | 0.329 | 0.328 | 0.352 | 0.386 | 0.358 | 0.321 | 0.310 | 0.296 | 0.281 | 0.233

Note: $z = 36$ denotes the number of experts; $f_{xy}^{z}$ is the average influence of factor $x$ on factor $y$; and $a$ denotes the number of factors, here $a = 14$, giving an $a \times a$ matrix. $\frac{1}{a^{2}} \sum_{x=1}^{a} \sum_{y=1}^{a} \frac{|f_{xy}^{z} - f_{xy}^{z-1}|}{f_{xy}^{z}} \times 100\% = 0.95\% < 5\%$; the significant confidence is 99.05%.
Table 4. The weights, the sum of effects, and the ranking per factor.

Aspects/Factors | p_x | q_x | p_x + q_x | p_x − q_x | Influential Weights (Global Weights)
ARB (A1) | | | | | 0.270
PU (f1) | 1.423 | 1.430 | 2.853 | −0.006 | 0.094
PEOU (f2) | 1.365 | 1.351 | 2.716 | 0.014 | 0.089
Complexity (f3) | 1.331 | 1.338 | 2.669 | −0.007 | 0.088
PBC (A2) | | | | | 0.244
SE (f4) | 1.095 | 1.108 | 2.203 | −0.012 | 0.080
RFC (f5) | 1.207 | 1.132 | 2.339 | 0.075 | 0.082
TFC (f6) | 1.082 | 1.145 | 2.227 | −0.063 | 0.081
TRB (A3) | | | | | 0.277
DOT (f7) | 1.371 | 1.355 | 2.726 | 0.017 | 0.090
SA (f8) | 1.395 | 1.434 | 2.829 | −0.038 | 0.096
TB (f9) | 1.388 | 1.367 | 2.755 | 0.022 | 0.091
IR (A4) | | | | | 0.210
UB (f10) | – | 1.809 | 3.603 | −0.016 | 0.045
VB (f11) | – | 1.719 | 3.526 | 0.089 | 0.043
RB (f12) | – | 1.690 | 3.547 | 0.166 | 0.042
TB (f13) | – | 1.593 | 3.099 | −0.088 | 0.039
IB (f14) | – | 1.593 | 3.034 | −0.151 | 0.040
Table 5. The evaluation of the legal artificial intelligence (AI) bot for multiple stages by M-VIKOR.

Aspects/Factors | Local Weight | Global Weight (DANP) | Gap (h_kj), Intention (H1) | Gap (h_kj), Adoption (H2)
ARB (A1) | 0.270 | | 0.314 | 0.206
PU (f1) | 0.347 | 0.094 | 0.175 | 0.100
PEOU (f2) | 0.328 | 0.089 | 0.375 | 0.275
Complexity (f3) | 0.325 | 0.088 | 0.400 | 0.250
PBC (A2) | 0.244 | | 0.508 | 0.410
SE (f4) | 0.330 | 0.080 | 0.500 | 0.379
RFC (f5) | 0.336 | 0.082 | 0.400 | 0.375
TFC (f6) | 0.334 | 0.081 | 0.625 | 0.475
TRB (A3) | 0.277 | | 0.413 | 0.209
DOT (f7) | 0.325 | 0.090 | 0.413 | 0.225
SA (f8) | 0.347 | 0.096 | 0.425 | 0.175
TB (f9) | 0.329 | 0.091 | 0.400 | 0.229
IR (A4) | 0.210 | | 0.608 | 0.419
UB (f10) | 0.215 | 0.045 | 0.700 | 0.350
VB (f11) | 0.207 | 0.043 | 0.500 | 0.325
RB (f12) | 0.199 | 0.042 | 0.375 | 0.225
TB (f13) | 0.188 | 0.039 | 0.750 | 0.625
IB (f14) | 0.191 | 0.040 | 0.725 | 0.600
Total gaps | | | 0.450 | 0.301
