Article

Continuance Use of Cloud Computing in Higher Education Institutions: A Conceptual Model

Department of Software Engineering and Information System, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Serdang 43400, Malaysia
* Authors to whom correspondence should be addressed.
Submission received: 29 July 2020 / Revised: 26 August 2020 / Accepted: 1 September 2020 / Published: 23 September 2020
(This article belongs to the Special Issue Innovations in the Field of Cloud Computing and Education)

Abstract
Resource optimization is a key concern for Higher Education Institutions (HEIs). Cloud computing, as the current generation of computing technology in the fourth industrial revolution, has emerged as the main standard of service and resource delivery. As cloud computing has grown into a mature technology and is being rapidly adopted in many HEIs across the world, retaining customers of this innovative technology has become a challenge for cloud service providers. Current research on cloud computing has largely studied the acceptance or adoption of the technology; however, little research has been devoted to continuance use in organizational settings. To address this gap, this study aims to investigate the antecedents of cloud computing continuance use in HEIs. Hence, drawing on the prior literature on organizational-level continuance, this research established a conceptual model that extends and contextualizes the IS continuance model through the lens of the TOE framework (i.e., technological, organizational, and environmental influences). The results of a pilot study, conducted through a survey with information and communications technology (ICT) decision makers and based on the proposed conceptual model, indicate that the instrument is both reliable and valid, and so point the way towards further research. The paper closes with a discussion of the research limitations, contributions, and future directions.

1. Introduction

Cloud computing (CC) is increasingly becoming a springboard for digital innovation and organizational agility. Higher education institutions (HEIs) face problems with growing student numbers, the growing need for IT and infrastructure, the quality of educational provision, and the affordability of education services [1,2]. With the high rate at which IT changes, resource management optimization is a key concern for HEIs [3], not least because on-premise systems can only operate effectively when they receive adequate initial funding and resources, as well as dedicated and systematic maintenance regimes [4,5]. Institutions looking to compete in the new world need a flexible yet comprehensive digital transformation blueprint that integrates various technologies across the institution, with CC at its foundation. CC, as the current generation of computing technology in the fourth industrial revolution (IR 4.0), has emerged as the main standard of service and resource delivery [6], and it has become an excellent alternative for HEIs to support cost reduction, quality improvement and, through this, educational sustainability [7] by providing the required infrastructure, software, and storage as a service [3]. Thus, CC has been adopted rapidly in both private and public organizations, including HEIs [3,8,9]. CC has been described as the fifth most frequently used utility after gas, electricity, water, and telephone lines [10], and the CC tracking poll from the International Data Corporation indicates that spending on CC will reach USD 370bn by 2022, corresponding to a 22.5% five-year compound annual growth rate [11].
However, while the subscription model of cloud services contributes to the growth of the overall market and makes it accessible for HEIs, a new set of challenges has arisen. This research addresses one such challenge faced by cloud service providers: the risk that clients discontinue a service. The possibility of a decision to discontinue a cloud service provider is exacerbated by the low cost of switching between applications [12] and, in general, by competitive markets [4]. Therefore, the conceptualization of CC service in HEIs shifts to a decision on ‘continuance’, rather than ‘adoption’.
Moreover, in subscription models offered via cloud-based education systems, it is possible for HEIs to switch vendors if they perceive greater benefits elsewhere. Thus, it is essential to understand the conceptual differences between adoption and continuance [13]. Hence, research on CC continuance has practical and artifact-specific motivations. Furthermore, theoretical research on organizational-level continuance is also scarce [14], particularly in HEIs [9,15]. Typically, continuance research has been undertaken at the individual user level; however, organizational continuance decisions are often made by senior IS executives or others in the organization who may not be intense users of the service in question [14]. For many of these executive decision makers, a strong influence may be attributed to factors that are insignificant for individual users (e.g., lowering organizational costs or shifting a strategic goal) [16].
Thus, to contribute to the body of knowledge on organizational-level continuance, this study draws on the prior literature in organizational-level continuance to establish a conceptual model that extends and contextualizes the IS continuance model to improve our understanding of the determinants of CC continuance use in HEIs through the lens of the TOE framework (i.e., environmental, organizational, and technological influences). Following the establishment of the conceptual model [15], the model was validated by conducting a pilot study with senior decision makers, all of whom were asked about aspects of their organizations relating to CC services. Therefore, the question this study sought to address was the following: “What constructs influence the organizational-level continuance of CC in HEIs?” To address this question, our research relies on a positivist quantitative-empirical research design. In this research, the unit of analysis (i.e., the basic element of observation indicating to whom or what the research generalizes) [17] is the organization, and the organization-level phenomenon will be observed through individuals involved in organizational CC subscription decisions [17].
In general, we contribute to research in several ways. First, and most importantly, this research contributes to the body of knowledge within the IS field surrounding the continuance phenomenon. In practical settings, many organizations are concerned with reducing capital IT expenses [3,18,19], and IS services allow client organizations to select from various services that they can continue using or discontinue using [8,9]. Therefore, research that focuses on the continuance of IS plays a critical role in theory and practice. Second, we develop a conceptual model that provides a clear perspective through which HEIs can answer the question related to their use of CC services: “should we go, or should we stay?” Third, the results of the full-scale research will assist IT decision makers in CC when seeking to optimize institutional resource utilization, or to commission and market CC projects. As a case in point, the results can serve as guidelines that cloud service providers can use to focus their efforts towards retaining customers. These can also be leveraged by clients to guide routine assessments of whether the use of a specific CC service should be discontinued. Fourth, the study is expected to contribute to developing the literature on the best available organizational-level continuance models for HEI settings. Last, in providing a model for CC continuance use, we offer a new explanation for organizations’ continuance use of novel technologies. Measuring the model constructs not only reflectively but also formatively would add little to the practical contribution of the study on its own; thus, further quantitative and qualitative research about the conceptualized model and its relationships is needed.
The structure of the rest of this study is as follows: a literature review is given in the next section. Theoretical models are then identified and analyzed, and a conceptual model for exploring the continuance of CC in HEIs is proposed. In turn, the method is explained, and the preliminary results of the study are presented. Finally, the study’s results are discussed, their implications are examined, and the contributions of the research are outlined.

2. Background and Related Work

As a term, CC is defined diversely in the literature. In this paper, the definition of CC used is comparable to NIST’s [6], which regards CC as the set of aspects that are common across all CC services. Hence, from the perspective of this paper, CC relates to the applications and shared services involved in the surveyed institutions through subscription-based models, whereby shared data servers or application activities are accessed.
In HEIs, CC has been identified as a transformative technological development [3]. This is because CC benefits from rapid IT implementation, especially for research, which compares favorably when considered against legacy software systems. Additionally, CC solutions can be exploited to assist in implementing socially oriented theories of learning, as well as cooperative learning [20]. CC resources can be used to create e-learning platforms, infrastructure, and educational services through the centralized provision of data storage, virtualization, and other facilities [21]. With these considerations in mind, CC, for certain HEIs, is essential, and many institutions rely on the technology to reduce costs, remain competitive, and satisfy learner and teacher requirements [22]. The accessibility and transparency of CC services mean that HEIs can utilize existing knowledge to their mutual benefit [23].
To examine CC use in HEIs, a systematic literature review (SLR) was undertaken. The following electronic databases were included in the literature search: Web of Science, IEEE Xplore, ScienceDirect, Scopus, ACM Digital Library, Emerald, and Springer [9]. Additionally, the following search terms were entered into each of these databases: ((cloud OR “cloud computing”) AND (adoption OR usage) AND (education OR teaching OR learning)). The literature search revealed that many studies had been published in this area, and that the rate of publication had been increasing for the past few years. IS researchers have tended to investigate CC use in HEIs from the perspective of individuals [7,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49] or from the perspective of organizations [50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65].
For the last three decades, many researchers have sought to evaluate the success of information systems (IS). Theoretical and practical contributions have been discussed, many of which indicate that the factors underpinning IS success are multidimensional. Furthermore, several studies have contributed to CC continuance use in highly different settings [14,66,67,68,69,70,71,72,73,74,75,76]. Nonetheless, the factors that drive institutions to continue or end a CC subscription have yet to be clarified [14,16,66], specifically in HEIs [9]. Given that almost all CC service models rely on subscriptions [6], this is an unexpected finding. Therefore, the purpose of the current research was to assess the main factors that HEIs consider when deciding whether to continue their use of CC services. In Table 1, evidence pertaining to the continuance use of CC is synthesized.

2.1. Life Cycle of an Information System

This study belongs to the well-established stream of literature that has examined the phenomenon of “technology adoption”, which was initially defined by Rogers [81] as a five-step process. Since then, a range of models has been developed to extend Rogers’ preliminary work [82,83,84]. According to some researchers, adoption is a multi-phase process rather than a binary decision [85,86,87,88], and for some theoreticians, adoption occurs over seven rather than five stages [89,90]. However, most researchers agree that technology adoption operates across the following five stages: awareness, interest, evaluation, trial, and continuance. Several researchers advocated a four-phase model, which involved initiation, adoption, decision, and implementation (e.g., [81,91,92]). However, certain studies have concentrated their empirical understanding of technology adoption into a single phase (e.g., adoption or pre-/post-adoption) [14]. This perspective is consistent with the wide-ranging stages of technology adoption that have been investigated in previous studies [93,94,95] (see Figure 1). Certain studies have reported that, in terms of the methods that are available in technology adoption research, many are limited because they do not differentiate between changes in the significance of factors in the different phases of adoption [96]. Therefore, opportunities, as well as suitable research settings for exploring continuance as the last phase of adoption, have been limited.
As shown in Table 1, previous studies have adapted various IS theories and empirically analyzed CC in different contexts, from an individual or organizational viewpoint, in the pre-adoption or post-adoption (i.e., continuance use) phase. However, no empirical study was found measuring the continuance use of CC in HEIs. Therefore, the main contribution of this study is to develop an instrument and conceptualize a model to measure the continuance use of CC in the context of HEIs. Besides this context-specific contribution, our study also reduces the gap related to organizational IS continuance research.
A range of theories has been used to study adoption (e.g., UTAUT or TAM), continuance (e.g., ISC), and discontinuance at the individual level. Contrastingly, scholars have used theories such as TOE, DOI, and social contagion to examine adoption from an organizational perspective. Additionally, the ISS and ISD models have been leveraged to investigate continuance and discontinuance, respectively, at the organizational level. Table 2 provides an overview of the various theoretical approaches that have been used in the literature to examine the lifecycle of an IS. Dissimilar to studies that have focused on the individual level, those that have addressed continuance and discontinuance at the organizational level are few and far between [9,14,16,97].
After initially adopting an IS, a user decides to either continue or discontinue using it. Contrastingly, an organization is unlikely to simply retire or replace its on-premise systems [106]. Nevertheless, since almost all CC services involve a subscription model, and since this study’s focal point is the issue of continuance, the aim of this study is not to examine the factors that influence the initial use of CC services in HEIs. Instead, the main area of focus in the current study is the set of constructs that contribute to IS continuance. An implication of this is that it is possible to evaluate success and system performance, which is dissimilar to the pre-adoption phase in which only expectations can be used to estimate usage. This also enables the integration of post-adoption variables as predictors of IS adoption continuance, thereby exerting far-reaching impacts on model development.

3. Theoretical and Conceptual Background

In this section, we present the theoretical and conceptual background, focusing on the prior literature in organizational-level continuance to extend and contextualize the IS continuance model to improve our understanding of the determinants of CC continuance use in HEIs.

3.1. IS Continuance Model

The IS Continuance (ISC) model [101] was developed based on expectation confirmation theory (ECT) [100], a theory that has been used widely in the field of marketing to examine the impact of user satisfaction on a user’s intention to continue using a technology [107,108]. As Figure 2 indicates, IS continuance behavior in the ISC model is informed by post-consumption variables, namely, perceived usefulness and satisfaction. Bhattacherjee [101] applied several theoretical changes to fine-tune ECT for the ISC model.
The first of these changes relates to the pre-consumption antecedents of confirmation, namely, perceived performance and expectation. Specifically, both antecedents were removed from the model, and the researcher’s rationale for doing so was that their influences are addressed in other constructs (specifically, confirmation and satisfaction). The second change relates to the addition of an ex-post expectation variable, namely, perceived usefulness. Noteworthily, ex-post expectation is critical in IS services and products, principally because expectations tend not to remain stable over time. Consistent with previous studies on initial IS use [98,109], the study conducted by Bhattacherjee [101] reported that perceived usefulness may continuously influence subsequent IS continuance use decisions. Resultantly, perceived usefulness was considered a novel determinant of satisfaction. The third change is that, regarding the usefulness–intention connection originally developed in TAM [98], the ISC model suggests that this may exist not only in the original use context but also in the continuance context. This is linked to the fact that continuance intention in humans can be attributed to a sequence of adoption decisions that are not related to factors such as timing or the behavioral stage [110]. Hence, perceived usefulness should have a direct impact on IS continuance intention, as well as an indirect effect on IS continuance intention via satisfaction. In the literature, the ISC model has primarily been employed to examine continuance use from an individual perspective. However, the model has been extended for organizational post-adoption studies (e.g., [72]); consequently, it is associated with a substantial level of external validity. Besides, the ISC model has been used to explain the continuance phenomenon in the education context (e.g., [111,112,113]).
The focal point of the ISC model is an individual’s continued acceptance of technology, whereas the purpose of this study is to address organizational continuance use. However, several researchers have extended the model to the organizational post-adoption context (e.g., [72]). As suggested by the TOE model, a critical point of difference between organizational and individual continuance use settings is that the technology adoption decisions made by organizations are typically informed both by technology factors associated with individual beliefs (e.g., satisfaction) and by organizational factors such as external threats and opportunities. Therefore, to fine-tune the ISC model to address the research problem, it is necessary to supplement it with factors from the organizational continuance context, especially those relating to organizational and environmental settings [72]. Given that researchers need to choose a theory or model that is appropriate for the research setting (in this case, continuance use at the organizational level), this study substitutes the perceived usefulness construct of the ISC model based on logical reasoning. Perceived usefulness is generally considered the most relevant technological factor informing IS post-adoption behavior in the ISC model, and a range of studies, including [72,114], have employed it as a baseline model. Nevertheless, the theory of planned behavior (TPB) [115] suggests that net benefits ought to be viewed as a behavioral belief (that is, akin to perceived usefulness) [116]. For this reason, the net benefits construct taken from the IS success model is used instead of perceived usefulness, thereby achieving an effective fit with the research setting.

3.2. IS Success Model

According to Gable, Sedera [67], the positive outcomes arising from IS are the ultimate “acid test”, and so the question that should be addressed is that of whether IS has been beneficial for the organization [117,118,119]. Other questions of interest are whether IS is worth retaining, whether it requires modification, and the impacts that it will deliver in the future. Hence, a continuance decision relating to IS can be regarded as something that is informed by IS success at several levels [14].
Numerous studies have been conducted on IS success (ISS), and the ISS model [105], as well as a revised version of this model [120], have been used extensively in the literature to explore the issue [121]. The use of ISS model [105] in this study stems from the following considerations: firstly, the model has been used in various research settings [14,67,72,122,123]; secondly, it is easy to communicate the results of the model due to the systematic nature of the included dimensions; and thirdly, the model is not narrowly applied as a framework for measuring success, and so it has a high level of external validity.
The success dimensions chosen by a researcher must be based on the research setting and research problem; therefore, in this study, two dimensions are removed based on logical reasoning: namely, use and service quality. In terms of the use construct, it has been critiqued in the literature for several reasons [67,124,125,126,127], particularly in terms of the way it performs an intermediate function that lies between quality and impacts, and as such does not operate as a measure of success [128]. Additionally, the system use construct of IS success has been identified as unsuitable in previous studies of system performance [14]. Regarding the service quality construct, this was removed for the following reasons: firstly, it is comparable to the complete idea of IS success in several ways, where system and information quality constitute the “functional” aspect, while the effects constitute the “technical” aspects of the “operational” IS (in this case, the system is considered a set of services); and secondly, assessing the service provider’s services involves a narrow perspective of service quality, and in CC service research, an assessment of this kind would be an antecedent rather than a measure. Given these considerations, several constructs, namely information quality, system quality, and net benefits, are integrated into the conceptual model of this study.

3.3. IS Discontinuance Model

Additional factors that influence organizational persistence, particularly in the context of CC use in HEIs, were identified in this study to explain CC continuance use in the organizational-level IS post-adoption context. This resulted in the inclusion of system investment as an organizational construct, as well as technical integration as a technological construct [16]. Each of these constructs relates to the commitment that an organization has to a subscription-based technology (e.g., CC) [14].
Regarding technical integration, this study sought to examine whether the features of a technology impacted continuance decisions. Specifically, in view of the principal objective of CC, the study investigated the degree to which HEIs would benefit from the sophistication provided by the features of CC services. However, the positive impacts of the features of the technology are not assured in many cases. For example, certain organizations may not have the expertise needed to exploit the complexity of the technology [129,130,131], meaning that they cannot integrate it into the overall functioning of the firm. As a result of this, the organization would be compelled to discontinue their use of CC services [14,78].
In terms of the system investment variable, this has often been referred to in the literature as a “sunk cost” and treated as a source of behavioral persistence [132]. Among managerial personnel, it is common to continue investing in an area despite reasonable evidence for not doing so. The notion of a sunk cost becomes relevant when the cost of acquisition can be regarded as a capital expenditure (CapEx). System investment studies have assessed the role played by CapEx in the formulation of computer software prices in the context of switching between software solutions [133], as well as its impact on subsequent IS outsourcing decisions [134]. In the case of CC, system investment is a noteworthy variable because of the technology’s low barriers to entry and minimal overhead costs [135]. In view of this, it is not unreasonable to view CC services as any other utility that can be turned on and off at will [135,136]. However, it is critical to recognize that many CC services are typically associated with significant implementation costs. A key implication of this is that system investment is fundamental to CC continuance in HEIs.
Regarding competitive pressure, this refers to the pressure that an institution’s leadership may feel regarding the performance-related abilities that its competitors are gaining through the exploitation of CC services (e.g., an increase in student assessment outcomes due to the use of CC platforms) [72,137,138].

3.4. TOE Framework

In the context of the TOE framework [102], it is possible to divide the constructs determining behavior related to CC continuance into the following contextual areas: firstly, organizational context; secondly, technology context; and finally, environmental context. However, it is notable that the framework itself does not include information about these constructs. In terms of the effect of the technology context on CC adoption behavior, this refers to the set of technology-related factors that feed into an organization’s decision to adopt an innovative IS [139]. As for the organizational context, this is concerned with the way in which various factors affect IS adoption behavior. These factors include available resources, opportunities for collaboration, profile characteristics, peer influence, internal communication, organizational culture, formal and informal linking structures, human resources quality, firm size and scope, and the internal social network. Finally, the environmental context demonstrates that an organization’s IS adoption is significantly impacted by constructs that lie outside of its direct control (e.g., competitors, government regulations, and supply chains) [102]. In view of these considerations, it is clear that the TOE framework can play an effective role in identifying non-technology-level factors that have not been considered in other studies on consumer software (e.g., constructs relevant to external circumstances) [140]. Additionally, the TOE framework helpfully interprets the notion of adoption behavior based on the following technological innovations: firstly, innovations applied for technical tasks (i.e., type 1 innovations); secondly, innovations relating to business administration (i.e., type 2 innovations); and thirdly, innovations integrated into core business procedures (i.e., type 3 innovations) [141].
Along with the technological and organizational variables of continuance use, which were derived from models of IS continuance, discontinuance, and success, constructs were identified that affect organizational and environmental persistence, especially insofar as they relate to CC in HEIs. As a result of this process, collaboration was identified as an organizational variable [42,142,143,144], while regulatory policy [145,146,147] and competitive pressure [72,145] were identified as environmental variables. Collaboration tasks lie at the heart of HEIs, and collaboration can be conceptualized as the ability of CC services to facilitate communication among stakeholders [42,142]. In the case of digital natives, CC services play a vital role in effective collaboration [148,149]. Table 3 presents a mapping matrix for the continuance use constructs and theories, each of which has been obtained from the extant and related literature [150,151].

4. Research Model and Hypotheses

A robust theory of organizational-level continuance of CC is yet to be developed [14]. Therefore, based on the theoretical and conceptual background outlined previously, this research used a method that complements and contextualizes existing constructs in the IS continuance model through the lens of the TOE framework. We extended the IS continuance model [101] using constructs from dominant models in organizational-level IS post-adoption research, namely the IS success model [105,120] (i.e., net benefits, system quality, and information quality) and the IS discontinuance model [16] (i.e., technical integration, system investment, and competitive pressure). To keep our research model coherent and relevant, we identified additional contextual constructs from the literature to predict continuance use of CC in the educational context (i.e., collaboration and regulatory policy). To structure our model, we took a technological–organizational–environmental approach by applying the lens of the TOE framework [102] to our research model (i.e., technology context: net benefits, system quality, information quality, and technical integration; organizational context: system investment and collaboration; and environmental context: regulatory policy and competitive pressure). We also formulated related hypotheses to clarify our research agenda, emphasize research areas that need further investigation, and acquire requisite knowledge on CC continuance use. Figure 3 provides an overview of the original IS continuance model and this study’s proposed extensions. The model is grounded at the organizational level of analysis [156], and the smallest unit of analysis is an individual CC service.
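For readers who prefer a compact summary, the sketch below transcribes the model’s construct groupings into a simple data structure. It is purely illustrative: the construct names are shorthand labels, not identifiers taken from the study’s instrument.

```python
# Illustrative summary of the proposed research model, grouping the
# constructs by TOE context as described in the text above.
TOE_MODEL = {
    "technology":   ["net_benefits", "system_quality",
                     "information_quality", "technical_integration"],
    "organization": ["system_investment", "collaboration"],
    "environment":  ["regulatory_policy", "competitive_pressure"],
}
MEDIATORS = ["confirmation", "satisfaction"]  # from the IS continuance model
DEPENDENT = "cc_continuance_use"
```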
Based on a positivist, deterministic philosophical paradigm, a priori assumptions in the form of hypotheses were established for later statistical analysis to facilitate model validation. The propositions focus on the link between the independent variables encompassing the IS continuance model, IS success model, IS discontinuance model, and TOE framework, and the dependent variable, namely CC continuance use.
The relationships among perceived usefulness, confirmation, continuance intention, and satisfaction, as noted by Bhattacherjee [101] in the context of system acceptance, are relevant for investigating CC continuance in HEIs. In this study, perceived usefulness was substituted by net benefits, which is a cognitive belief relevant to IS use [157]. In the context of TAM, perceived usefulness is considered a user’s belief about system usefulness [98], whereas in the organizational context, net benefits are considered a belief about the degree to which IS promotes organizational objectives. This definition is aligned with other organizational-level definitions [157,158]. We thus propose the following hypotheses:
Hypothesis 1 (H1). 
An institution’s satisfaction level with initial CC adoption positively influences its CC continuance use.
Hypothesis 2a (H2a). 
An institution’s extent of confirmation positively influences its satisfaction with CC use.
Hypothesis 2b (H2b). 
An institution’s net benefits from CC use positively influence its satisfaction with CC use.
Hypothesis 3a (H3a). 
An institution’s net benefits from CC use positively influence its CC continuance use.
Hypothesis 3b (H3b). 
An institution’s extent of confirmation positively influences its net benefits from CC use.
The relationships among system quality, information quality, and continuance intention [120] in the context of IS success can also be applied to CC continuance use in HEIs. Prior studies have examined the relationships among system quality, information quality, and satisfaction [77,159,160,161,162,163,164,165]. Hence, it follows:
Hypothesis 4a (H4a). 
System quality positively influences an institution’s satisfaction with CC use.
Hypothesis 4b (H4b). 
System quality positively influences an institution’s CC continuance use.
Hypothesis 5a (H5a). 
Information quality positively influences an institution’s satisfaction with CC use.
Hypothesis 5b (H5b). 
Information quality positively influences an institution’s CC continuance use.
The relationships among technical integration, system investment, and discontinuance intention [16] can also be applied to CC continuance use in HEIs. Thus, we propose:
Hypothesis 6 (H6). 
Technical integration positively influences an institution’s CC continuance use.
Hypothesis 7 (H7). 
System investment positively influences an institution’s CC continuance use.
Presently, the success of HEIs depends in large part on effective collaboration. This is noteworthy because, by leveraging CC, it is possible for HEIs to exploit new modes of communication between key stakeholders [42,142]. Digital natives, many of whom are students within HEIs, now require the Internet to undertake daily tasks [148,149], and also to participate in online group activities (e.g., socializing, group studying, and so on) [166]. In order to satisfy student requirements, it is necessary for practitioners within HEIs to understand the various ways in which knowledge and content can be delivered to them [167]. In view of this, it is important to know what types of expectations students have, and to understand how technology can be leveraged or incorporated into teaching activities to meet these expectations. Hence, it is not unreasonable to suggest that the competitiveness of an HEI depends on its utilization of novel technology to satisfy student needs, and to enable streamlined collaboration and communication [168]. Taking the context into account, we thus predict:
Hypothesis 8 (H8). 
The collaboration characteristics of CC services positively influence an institution’s CC continuance use.
Regulatory policy is another critical consideration that is likely to affect an organization’s decision to use, or to continue using, a technology. One reason for this is that the regulatory policies established by a government play a key role in setting laws relating to the use of certain technologies (e.g., CC) [147,169,170]. For example, the authors of [145,146,147] discussed how regulatory policies have shaped adoption trends in CC in various research settings. Taking the context into account, we therefore hypothesized:
Hypothesis 9 (H9). 
Regulatory policy positively influences an institution’s CC continuance use.
As defined above, competitive pressure refers to the pressure that an institution’s leadership may feel regarding the performance-related abilities that its competitors are gaining through the exploitation of CC services [72,137,138]. In the literature, several scholars have noted that competitive pressure plays a determining role in influencing CC use in multiple research settings [72,153,170,171,172,173]. Taking the context into account, we thus predict:
Hypothesis 10 (H10). 
Competitive pressure positively impacts an institution’s CC continuance use.
The research model will be used to examine CC continuance use at the organizational level in HEIs. Nevertheless, institutions are a key element of the CC ecosystem, in which diverse sets of actors are involved (e.g., government agencies, public organizations, and researchers). The proposed model can be referenced by CC actors in HEIs as a basis for cooperating with stakeholders, which is a prerequisite for the creation and provision of improved products and services.
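To make the model’s structure explicit, the hypothesized paths H1–H10 can be listed as (source, target) pairs, as in the minimal sketch below. The construct names are illustrative shorthand, and such a listing could later drive the path specification in the structural model.

```python
# Hypothesized structural paths (source construct -> target construct),
# transcribed from hypotheses H1-H10; names are illustrative shorthand.
HYPOTHESES = {
    "H1":  ("satisfaction",          "continuance_use"),
    "H2a": ("confirmation",          "satisfaction"),
    "H2b": ("net_benefits",          "satisfaction"),
    "H3a": ("net_benefits",          "continuance_use"),
    "H3b": ("confirmation",          "net_benefits"),
    "H4a": ("system_quality",        "satisfaction"),
    "H4b": ("system_quality",        "continuance_use"),
    "H5a": ("information_quality",   "satisfaction"),
    "H5b": ("information_quality",   "continuance_use"),
    "H6":  ("technical_integration", "continuance_use"),
    "H7":  ("system_investment",     "continuance_use"),
    "H8":  ("collaboration",         "continuance_use"),
    "H9":  ("regulatory_policy",     "continuance_use"),
    "H10": ("competitive_pressure",  "continuance_use"),
}
```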

5. Methodology

A positivist quantitative survey approach is warranted to address the research objectives illustrated through the research questions and the hypotheses. Instrument development, data collection, and data analysis mechanisms are discussed in detail below.

5.1. Research Design

This research relies on the positivist philosophical paradigm, as well as the collection and analysis of quantitative data. The rationale for this decision stems from the way in which the approach permits a cost-effective and timely research process [174]. Furthermore, a natural way to address the study’s research questions and test the hypotheses involved using a quantitative survey, since this yielded a direct approach to comparing dependent and independent variables. According to Creswell and Creswell [174], relationships between variables can be described when a non-experimental correlational research design with quantitative data is utilized.
Since this study relies on the theoretical foundation of continuance, established guidelines were followed to develop the research instrument (i.e., item formulation, scale development, and instrument testing) [175,176]. Following the model development, a pool of survey items was derived from the literature; then, content validity was tested by examining the extent to which every item reflected its nominated construct. Additionally, consistent with recommendations reported by Kelley [177] and McKenzie, Wood [178], an expert review evaluation process was undertaken to establish measurement representativeness, clarity, and comprehensiveness. Afterwards, a pilot test was undertaken to assess the validity and reliability of the research instrument. Instrument development and data collection were the two main methodological activities undertaken in this research, and these are described later in this manuscript.

5.2. Instrument Development

Survey research is an essential and complex process used to ensure research objectives are met [179]. Therefore, designing and selecting the correct survey instrument is fundamental, as it must answer the research questions of what is to be measured and how it is to be measured—in this case, construct validity and construct reliability, respectively [179,180].
In this study, both reflective and formative measures were used to test the research model, as shown in Table 4. Formative measurements were taken of net benefits, information quality, and system quality, mainly because formative measurement gives rise to actionable and specific concept attributes [181], which is especially interesting from a practical viewpoint. In the context of formative measurements, a single indicator’s weight is used to draw practical insights about the criticality of certain details, thereby generating information that guides practical enforcement in terms of the system characteristics (e.g., “overall system quality is high” (reflective) vs. “the system is easy to use” (formative)). Dissimilar to the formative constructs (e.g., system quality, net benefits, and information quality), the purpose of which is to assess an information system’s success, the reflective constructs can draw on measures already available in the literature. Hence, the measurement of these constructs involved well-validated reflective scales [16]. In the case of the formative instrument, this was developed based on the guidelines of Moore and Benbasat [175], combined with recent processes in scale development [182,183,184]. For the formative measures, the objective was to achieve mutual exclusivity and parsimony, and to identify one measure that would be the most appropriate for inclusion in the model. As a case in point, parsimony and accuracy are critical considerations for all measures in a formative model, particularly since every dimension and measure is essential. Consequently, there should only be a small level of overlap, and no unnecessary measures or dimensions should be present. Such attention is considered vital in selecting the tentative measures.
It is worth drawing attention to the fact that the questionnaire scales were adapted from the prior literature (i.e., from well-validated studies on the IS continuance model [101], IS success model [105,120], IS discontinuance model [16], and TOE framework [102]) (see Appendix A). The instrument’s feasibility, consistency of style and formatting, readability, and linguistic clarity [185,186] were evaluated in interviews with academic researchers (n = 2) with experience in questionnaire design. Their feedback on the general design and measurement scales was requested to improve the usability of the questionnaire. A content-based literature review approach recommended by Webster and Watson [187] was used for instrument conceptualization and content specification, in which the constructs were clearly defined (see Table 3). The next stage involved producing an item pool, the purpose of which was to represent every aspect of the construct without overlap [183]. Notably, the elimination of a measure from a formative indicator model risks leaving out a relevant part of the conceptual domain (or, for that matter, changing a construct’s meaning). This is because the construct is the set of all indicators [188], and also because maintaining irrelevant items will not introduce bias into the results when examining the data with PLS [181]. In view of this, every dimension that was identified was retained and changed into an item.
The questionnaire comprises three parts: firstly, a preamble; secondly, a demographic section; and finally, a section on the constructs relating to continuance use of CC in HEIs. For the first section, we applied the key informant approach [198], in which two questions were used to eliminate participants from the sample: first, checking for participants whose institutions had not yet adopted CC services; and second, checking for participants who did not participate in the ICT adoption decision. Participants eligible for inclusion in the study could then complete the next two sections of the questionnaire. In the second section, demographic data about each institution’s age, faculty, student population, years of CC service adoption, and type of CC service model were gathered. A service provider variable was also measured by asking respondents who their CC service provider was (e.g., Oracle, Microsoft, Google, Salesforce, and Amazon, among others). Notably, the service providers did not affect the final dependent variable. In the third section of the questionnaire, each item sought to address an aspect of the research question, particularly measuring the constructs that lead towards organizational continuance use. These included satisfaction (S), confirmation (Con), net benefits (NB), technical integration (TE), system quality (SQ), information quality (IQ), system investment (SI), collaboration (Col), regulatory policy (RP), and competitive pressure (CP). As shown in Appendix A, a 5-point Likert scale, ranging from “strongly agree” to “strongly disagree”, was used to measure each item. Additionally, the satisfaction items were measured on 5-point Likert scales with different anchors (e.g., 1 = very dissatisfied to 5 = very satisfied; 1 = very displeased to 5 = very pleased; 1 = very frustrated to 5 = very contented; and 1 = absolutely terrible to 5 = absolutely delighted).
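As a concrete illustration of the screening and coding logic described above, the following minimal sketch shows how the two key informant questions and the 5-point Likert coding could be applied to pilot data. The file name, column names, item codes, and the “neutral” midpoint label are assumptions for illustration; they are not taken from the study’s instrument.

```python
import pandas as pd

# Hypothetical pilot data; column names and item codes are assumptions.
df = pd.read_csv("pilot_responses.csv")

# Key informant screening: exclude respondents whose institution has not
# adopted CC services, or who are not involved in the ICT adoption decision.
eligible = df[(df["cc_adopted"] == "yes")
              & (df["ict_decision_maker"] == "yes")].copy()

# Numeric coding of the 5-point Likert items (assumed anchor labels).
likert = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}
items = ["NB1", "NB2", "SQ1", "SQ2"]  # illustrative subset of item codes
eligible[items] = eligible[items].replace(likert)
```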
Further, the instrument development process took into consideration the debate surrounding the practice of gathering perceptual data on both the dependent and independent variables from a single respondent [199]. In this debate, a central issue is whether the practice may result in excessive common method variance (CMV). Nevertheless, some studies indicate that CMV is a greater problem for abstract constructs (e.g., attitude) than for concrete measures (e.g., those linked to IS success in this research) [200]. Besides, it has been noted in the literature that the constructs of IS success are not highly susceptible to CMV [201]. Moreover, CMV is not a major concern for formative constructs because the items do not have to co-vary [14]. Furthermore, in operationalizing the research instrument, CMV can be further reduced by not grouping the items from reflective constructs under their associated construct headings [199,200].

5.3. Data Collection

A continuance decision within an organization must be reached unanimously, and so the choice made in this research to use an individual as a representative of an organization (even inside a team) is reasonable [14]. Given the adopted survey methodology, individuals reported on organizational properties. Hence, ensuring that each participant had the requisite authority and knowledge to contribute data was critical. Therefore, the key informant approach [198] was utilized in this research. In the introductory section of the questionnaire, participants were informed that the study only sought to recruit key decision makers within organizations. Furthermore, the participants were asked directly to withdraw from the study if they were not directly involved in their institution’s decision to continue using CC services.
For data analysis, descriptive statistics were used to examine data from the first and second sections of the questionnaire. The survey respondents were recruited through online and offline distribution channels. An invitation, with reminders, was sent online from the institutional email to 50 people; 34 completed and submitted the questionnaire, a response rate of 68%. Of the 12 surveys handed over face to face, only four were completed and returned successfully, a response rate of 33.3%. This indicates that the face-to-face method consumes more time and effort than the online approach. For most scholars, a pilot study sample size of 20–40 is reasonable [202,203,204,205,206], and so our pilot study’s reliability statistics were based on 38 completed questionnaires.

5.4. Data Analysis

Cronbach’s Alpha and composite reliability (CR) tests were used to measure instrument reliability. Each test was undertaken using the Statistical Package for the Social Sciences (SPSS). For the purpose of validating the measurement and structural model, structural equation modelling (SEM) was applied to the pilot data with SmartPLS 3.0 [207]. A variance-based technique was used to analyze the structural model, and this decision was made for several reasons: firstly, the partial least squares (PLS) method is effective for small-to-moderately sized samples, and it provides parameter estimates even at reduced sample sizes [208,209]; secondly, PLS is viable for exploratory research [210], particularly when examining new structural paths in the context of incremental studies that extend previous models [211], or when the relationships and measures proposed are new or have not been extensively examined in the prior literature [212,213]; and thirdly, the variance-based approach in PLS is effective for predictive applications. Therefore, since the study’s objective was to identify the factors underlying organizational-level CC continuance use (i.e., not to examine a particular behavioral model), PLS was a suitable choice [214].

5.5. Prototype Development and Evaluation

Drawing on the literature and theories of the IS continuance model, IS success model, IS discontinuance model, and TOE framework, the present study proposes a continuance use measurement prototype (i.e., a system/application) to validate the research model.
In this phase, a prototype is developed, and an evaluation is conducted through a survey. The main purpose of the prototype development is to apply and validate the proposed model through an evaluation of the level of match between the model and the prototype in the real world. The prototype developed for the present study is in accordance with the processes proposed by Sommerville [215]. The processes involved in this phase are shown in Figure 4 and detailed below.

5.5.1. Establish Prototype Objectives

The objectives of developing the prototype are: (i) to assist cloud service providers and ICT decision makers at HEIs in evaluating the continuance use of CC services; and (ii) to provide a guideline on the vital requirements needed to ensure successful use of CC services in HEIs.

5.5.2. Define Prototype Functionality

The fundamental criterion for quantifying the success of a software system is the extent to which it pleases its customers [48]. A software/system requirements specification (SRS) is an explanation of a proposed software system that meets different kinds of stakeholder needs, and it distinguishes two main types of requirements: functional and non-functional. The functions relevant to the development of the prototype are derived from the literature. Accordingly, the specific features that are significant to the proposed prototype should be listed and elaborated when full implementation is carried out.

5.5.3. Develop Prototype

The prototype will be built using the React JS web application framework, together with Hypertext Markup Language (HTML), Cascading Style Sheets (CSS), jQuery, a Node.js web server, and a MySQL database. A flow chart, a use case diagram, the content and navigation structure, and a data dictionary will be designed to build the prototype.

5.5.4. Evaluate Prototype

This research will implement a user acceptance test to validate the proposed model by evaluating the overall usability and acceptability of the prototype. A structured survey will be conducted at the end of the development process once the prototype is fully developed. The acceptance test used is based on the Perceived Usefulness and Ease of Use (PUEU) instrument by Davis [98], which is based on the Technology Acceptance Model (TAM) [216,217]. Perceived usefulness and perceived ease of use are hypothesized to be fundamental determinants of user acceptance and system use [98,218]. A sample of respondents consisting of ICT decision makers at HEIs will be selected to participate in this survey.
According to Faulkner [219], studies that evaluate a prototype of a novel user interface design reveal severe errors quickly and, therefore, often require fewer participants. The literature suggests that 3 to 20 participants provide valid results [220]. PUEU consists of 12 questions rated on a 7-point scale from unlikely (1) to likely (7). To analyze the PUEU test, descriptive analysis (mean, standard error, median, mode) will be carried out using SPSS. The questions of the survey are divided into two parts: the first part covers demographics, while the second covers the prototype’s perceived usefulness and ease of use. The PUEU test has been widely implemented to explain overall user acceptance of an information technology (e.g., [221,222,223,224,225,226]).
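As a sketch of the planned descriptive analysis of the PUEU responses (mean, standard error, median, and mode), the following could be run in Python as an alternative to SPSS. The file name and the item codes PU1–PU6/PEU1–PEU6 are assumptions, not identifiers from the instrument.

```python
import pandas as pd

# Hypothetical PUEU data: 12 items on a 7-point scale (1 = unlikely ... 7 = likely).
pueu = pd.read_csv("pueu_responses.csv")
items = [f"PU{i}" for i in range(1, 7)] + [f"PEU{i}" for i in range(1, 7)]

summary = pd.DataFrame({
    "mean":      pueu[items].mean(),
    "std_error": pueu[items].sem(),
    "median":    pueu[items].median(),
    "mode":      pueu[items].mode().iloc[0],  # first mode if several are tied
})
print(summary.round(2))
```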

6. Preliminary Results

A pilot study was conducted to increase the consistency of the measures used throughout the research. Noteworthily, the main objective of a pilot study is to validate the initial instrument and to identify any inconsistencies that could undermine the accuracy of the results [227].

6.1. Validity and Reliability of the Survey Instrument

Face validation, content validation, and a pilot study were undertaken to ensure the validity and reliability of the survey instrument. Face validity refers to the procedure of assessing an instrument based on feasibility, consistency of style and formatting, readability, and linguistic clarity [185,186]. In this study, face validity was evaluated in interviews with academic researchers (n = 2) with experience in questionnaire design. Content validity refers to the extent to which a survey instrument measures what it is intended to measure [177,178,227], and an expert panel was assembled to test it in this study. Although three experts are the minimum for a content validity panel [228,229], five were recruited for the present study (three academics involved in IS and CC, as well as two industry practitioners). Online and offline distribution channels were used for the questionnaire in order to assess the constructs, and the items were measured on the 4-point scale suggested by Davis [230]. A textbox was provided for every question, thereby ensuring that the participants had enough free space in which to provide comments. The participants identified areas for potential changes (e.g., “re-word this sentence”), and, to quantify the relevance of every item, the average congruency percentage (ACP) [231] was computed. Noteworthily, the threshold value of ACP recommended by DeVon, Block [186] was achieved (i.e., 90%). In this regard, it is worth drawing attention to the fact that the questionnaire’s items were adapted from the prior literature of the adopted theoretical models (i.e., the IS continuance model [101], IS success model [105,120], IS discontinuance model [16], and TOE framework [102]).
Reliability testing involved measuring each construct’s attributes using the Cronbach’s Alpha reliability statistic, a decision made owing to the statistic’s widely accepted status in the literature in terms of internal reliability (i.e., the ability of an object of measurement to yield the same results on independent occasions). Several studies have identified the minimum threshold for Cronbach’s Alpha as being greater than or equal to 0.7, while values less than 0.6 are considered to indicate a lack of reliability [232]. Cronbach’s Alpha values are given for every construct, as well as inter-item correlation values, in Table 5. According to Briggs and Cheek [233], inter-item correlations in a reflective measurement scale offer data pertaining to the scale’s dimensionality. Furthermore, the mean inter-item correlation is distinct from a reliability estimate because it is not impacted by scale length. Resultantly, it provides a less ambiguous sense of item homogeneity. When the mean inter-item correlation does not exceed 0.3, this suggests that the item is not strongly correlated with the others in the same construct [234]. At the same time, inter-item correlations that exceed 0.9 are indicative of multicollinearity issues [235]. Cronbach’s Alpha values that exceed 0.7 suggest favorable internal consistency in terms of the items on the scale [236]. Specifically, Cronbach’s Alpha values that exceed 0.9 are considered “excellent”, while values exceeding 0.8 and 0.7 are considered “good” and “acceptable”, respectively [237]. Values exceeding 0.6 are considered “questionable”, while values lower than 0.5 are considered “unacceptable” [237]. In this study, only one of the inter-item correlations fell below the recommended value of 0.3, and so the implications of eliminating the corresponding item from the survey instrument were considered. Given its weak correlations with the other items measuring the confirmation construct, the elimination of this item was contemplated; however, because the Cronbach’s Alpha value would not have increased after its deletion, the item was retained (see Table 5).
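For reference, the two statistics discussed above are straightforward to compute directly. The sketch below implements the textbook formula for Cronbach’s Alpha and the mean inter-item correlation; it assumes a pandas DataFrame whose columns are the numerically coded items of a single construct, and the item codes in the usage comment are hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mean_interitem_correlation(items: pd.DataFrame) -> float:
    # Average of the off-diagonal entries of the item correlation matrix.
    corr = items.corr().to_numpy()
    return corr[~np.eye(corr.shape[0], dtype=bool)].mean()

# Usage with hypothetical confirmation items Con1..Con3:
# print(cronbach_alpha(eligible[["Con1", "Con2", "Con3"]]))
# print(mean_interitem_correlation(eligible[["Con1", "Con2", "Con3"]]))
```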
Drawing on partial least squares (PLS) analysis, the validity of the reflective and formative models was evaluated based on the recommendations of Hair Jr, Hult [208], which are discussed below.

6.2. Reflective Measurement Model Evaluation

Assessment of the reflective measurement model involved taking an estimate of internal consistency, along with convergent and discriminant validity (see Table 6). The reliability of the instrument was acceptable, with reflective factor loadings in excess of 0.505 (i.e., greater than the recommended level of 0.5) [209]. CR was also acceptable, where every construct’s value was greater than 0.852 [238].
In the case of convergent validity, this was computed as the average variance extracted (AVE) for every construct, and it was greater than 0.5 in each case [239]. Every square root for the AVEs was greater than the corresponding latent variable correlation, thereby indicating a satisfactory level of discriminant validity (see Table 7).
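For completeness, the conventional formulas behind these two statistics (standard definitions from the PLS-SEM literature, not reproduced from this paper) are given below, where λ_i is the standardized loading of indicator i on its construct and k is the number of indicators:

```latex
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}
                   {\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)},
\qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```

Under the Fornell–Larcker criterion applied above, the square root of each construct’s AVE is compared against that construct’s correlations with all other latent variables.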

6.3. Formative Measurement Model Evaluation

The three-step process suggested by Hair Jr, Hult [207] was used to evaluate the formative measurement model (see Table 8). At the outset, convergent validity was examined, which refers to the degree to which a given measure is positively correlated with other measures of the same construct [207]. In order to test convergent validity, redundancy analysis can be performed [240]. In this study, every construct was associated with satisfactory convergent validity, and the path coefficients ranged from 0.763 to 0.884 (i.e., greater than the recommended level of 0.7) [207]. In the second step, collinearity issues were addressed by computing variance inflation factors (VIFs) for every indicator [241], which can identify multicollinearity problems among the measures. Most VIFs in this study did not exceed the recommended level of 5 [242], but several were unreasonably high. Nevertheless, because formative measurement models are based on the regression of the formative construct on its measures, the stability of the measures’ coefficients is affected not only by the strength of the intercorrelations but also by the sample size. Given that the present process is concerned with testing the initial model, these indicators will not be reconsidered until full-scale data are used to test the model.
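As an illustration of the second step, VIFs for a formative indicator block can be computed with statsmodels, as in the sketch below. The construct items SQ1–SQ4 are hypothetical stand-ins for the study’s actual indicators.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def formative_vifs(indicators: pd.DataFrame) -> pd.Series:
    # Each indicator is regressed on all others; VIF = 1 / (1 - R^2).
    X = sm.add_constant(indicators)
    return pd.Series({col: variance_inflation_factor(X.values, i)
                      for i, col in enumerate(X.columns) if col != "const"})

# vifs = formative_vifs(eligible[["SQ1", "SQ2", "SQ3", "SQ4"]])
# print(vifs[vifs > 5])  # flag indicators above the recommended threshold of 5
```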
In the final step of the three-step process, the indicators were evaluated in terms of their significance and relevance using the initial research model. A number of formative indicators were not significant at the 10% level. This is consistent with expectations because, as noted by Cenfetelli and Bassellier [243], the greater the number of indicators, the higher the likelihood that some will be non-significant, since each indicator competes to account for the variance in the target construct. Mathieson, Peacock [181] used a range of formative indicators in their study to assess perceived resources, and four out of seven were identified as insignificant. Similarly, Walther, Sedera [14] reported that system quality was associated with three significant indicators, while information quality and net benefits had only one and two significant indicators, respectively. Critically, however, a lack of significance for an indicator should not be viewed as suggestive of a lack of relevance. If an indicator’s outer weight is not significant but its outer loading is considerable (i.e., greater than 0.5), the indicator should be interpreted as absolutely important, even though it is not relatively important [207]. Hair Jr., Hult [207] further note that, if the theory-driven conceptualization of the construct strongly supports keeping the indicator (e.g., based on the judgements of specialists), it ought not to be eliminated in most cases. An additional issue relates to the phenomenon of negative indicator weights [243]. These should not be interpreted as items that negatively impact the construct; rather, they typically indicate that an indicator is more highly correlated with other indicators of the same construct than with the construct it measures.
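The significance testing in this step is typically performed through bootstrapping in PLS-SEM software. As an illustrative approximation only, the sketch below bootstraps ordinary least-squares weights (a stand-in for mode B outer weights, since a full PLS estimator is not implemented here) and checks whether the 90% percentile interval of each weight excludes zero, mirroring a two-sided test at the 10% level.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X = rng.normal(size=(n, 3))  # three formative indicators (hypothetical)
y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(scale=0.5, size=n)  # proxy construct score

def outer_weights(X, y):
    # OLS stand-in for mode B outer weights (construct regressed on its indicators).
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Bootstrap the weights: resample respondents with replacement and re-estimate.
boot = np.empty((5000, 3))
for b in range(5000):
    idx = rng.integers(0, n, size=n)
    boot[b] = outer_weights(X[idx], y[idx])

# A weight is deemed significant at 10% if its 90% interval excludes zero.
lo, hi = np.percentile(boot, [5, 95], axis=0)
significant = (lo > 0) | (hi < 0)
print(outer_weights(X, y).round(3), significant)
```

As in the study, a non-significant weight would then be judged against its outer loading and the theoretical case for retaining the indicator.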

7. Discussion and Conclusions

The goal of this research was to propose a conceptual model that extends and contextualizes the IS continuance model in order to improve our understanding of the organizational-level continuance of CC in HEIs. Drawing on the prior literature in organizational-level continuance (i.e., the IS success and IS discontinuance models), the proposed research model extends the IS continuance model, through the lens of the TOE framework, with the following constructs: net benefits, system quality, information quality, technical integration, system investment, collaboration, regulatory policy, and competitive pressure. The results of a pilot study, conducted through a survey with ICT decision makers and based on the proposed conceptual model, indicate that the instrument is both reliable and valid, and therefore pave the way for further research.
The pilot study’s results lend valuable support to the model constructs and the instrument for assessing CC continuance use in HEIs. Analysis with Cronbach’s alpha indicates that the model constructs have satisfactory reliability [238], while convergent and discriminant validity were established for each construct using the CR and AVE tests. Based on the feedback received from the pilot study’s participants, as well as on the statistical analysis, a revised survey instrument was devised for the full-scale research. The reflective measurement model was assessed by computing internal consistency, convergent validity, and discriminant validity, with satisfactory results in each case. The formative measures were evaluated for convergent validity, collinearity, and significance and relevance, revealing satisfactory performance in terms of convergent validity. Collinearity was examined by computing the variance inflation factor (VIF) for every indicator, most of which did not exceed the maximum threshold of 5 [242]. Certain VIFs were greater than this threshold, but because formative measurement models are based on regression (which is informed both by measure intercorrelations and by sample size), these indicators will be reconsidered when the model is tested with full-scale data. Additionally, the indicators were examined in terms of significance and relevance against the initial research model.

7.1. Theoretical Contributions

This study contributes empirical literature within IS, especially on CC; in addition, it provides an assessment of CC use in HEIs and sheds further light on the intention of decision makers to continue utilizing CC in HEIs.
One of the main contributions of this research is to the body of knowledge within the IS field surrounding continuance and discontinuance. This study provides an extensive model that extends and contextualizes the IS continuance model using three dominant models in organizational-level continuance (i.e., the IS success model, the IS discontinuance model, and the TOE framework) to improve our understanding of the organizational-level continuance of CC in HEIs. Furthermore, the results of the full-scale research will aid in developing a context-specific model that can be exploited to examine CC continuance in HEIs, and will extend the academic literature on CC with empirical evidence. In addition, the study will form the basis of a final model that could be applied to examine continuance with other novel technologies within the sphere of education, as well as in other sectors. Finally, the study is expected to contribute to developing the literature on the best-available organizational-level continuance models for HEI settings, as well as other similar domains.

7.2. Practical Implications

From a practical viewpoint, this study has potential implications for practitioners and cloud providers. Given that almost all CC service models rely on subscriptions [6], the current research proposed a conceptual model to assess the main factors that HEIs consider when deciding whether to continue their use of CC services. In practical settings, many institutions are concerned with reducing capital IT expenses [3,18,19], and IS services allow client organizations to select from various services that they can continue or discontinue using [8,9]. Therefore, research that focuses on the continuance of IS plays a critical role not only in theory but also in practice. Furthermore, measuring the model constructs not only reflectively but also formatively adds to the practical contribution of the study.
This study also has potential implications for decision makers. The research results will assist IT decision makers concerned with CC when seeking to optimize institutional resource utilization, or to commission and market CC projects. As a case in point, the individual weightings associated with the model’s constructs can serve as guidelines that software vendors can use to focus their efforts towards retaining customers. These weights can also be leveraged by clients to guide routine assessments of whether the use of a specific CC service should be discontinued.

7.3. Limitations

This study has some limitations that point to the focus of subsequent research. First, as in other organizational studies, the results may be biased by representing individual views rather than a shared opinion within the HEIs. This can be addressed if the constructs in the full-scale study are used for longitudinal rather than only cross-sectional evaluations, which would also allow client organizations to learn about critical “pain-points”. Second, the proposed model is intended for organizational-level use; however, CC ecosystems involve operationalizing constructs such as IT infrastructure availability and computing sophistication [244,245], so future research will have to take additional perspectives to understand continuance at the organizational level. Finally, the proposed model contextualized the IS continuance model based on the previous literature in organizational-level continuance to better suit HEIs; nevertheless, future research may have to consider further contextual constructs to understand continuance in HEIs.

7.4. Future Research Directions

The model and instrument development activities covered in this paper are the first step in a larger initiative that seeks to examine the factors affecting the organizational continuance use of CC in HEIs. The validated instrument that was piloted in this study will be used to gather data from a large sample of IT decision makers in Malaysian HEIs. The study will analyze the impact of every construct in the proposed model on the continuance use of CC, and their significance in the model will be validated through a test of the proposed hypotheses, for which SEM and possibly other analytical approaches (e.g., Artificial Neural Networks) will be used. A final model will be generated, which is expected to have considerable value for future organizational-level continuance studies of various technologies (e.g., CC continuance in SMEs and government departments). Therefore, further quantitative and qualitative research on the conceptualized model and its relationships is recommended. A prototype should be developed and evaluated to validate the research model, and a user acceptance test using the Perceived Usefulness and Ease of Use (PUEU) instrument will be conducted to demonstrate the overall feasibility and acceptability of the prototype. As the final phase of the research design [180], conclusions should be drawn from the final model through the interpretation of results confirmed by empirical analysis and by the evaluation of the developed prototype.
In a world where modern technologies keep evolving and organizations and individuals continue to adopt them, further assessment of their sustainability and successful use is required. Thus, researchers must continually examine these innovations by investigating the continuance use of future technologies. In this regard, the fourth industrial revolution (IR 4.0) provides dialectical, intricate, and intriguing opportunities across all aspects of our lives, through which society may be changed for the better. As a case in point, education in the IR 4.0 era (Education 4.0) is driven by technologies such as artificial intelligence (AI), augmented reality (AR), the Internet of Things (IoT), big data analytics, CC, 3D printing, and mobile devices, which can transform teaching, research, and service and shift the workplace from task-centered to human-centered [48,153,246,247,248,249,250,251]. Many IR 4.0 technologies have already been adopted and used across many sectors; therefore, further investigation of the continuance use of these technologies may attract researchers’ attention. Finally, future research on how technological advancements could leverage the learning and teaching process to achieve sustainability in HEIs is recommended.

Author Contributions

Conceptualization, Y.A.M.Q.; Data curation, Y.A.M.Q. and R.A. (Rusli Abdullah); Formal analysis, Y.A.M.Q., R.A. (Rodziah Atan) and Y.Y.; Methodology, Y.A.M.Q., R.A. (Rusli Abdullah) and Y.Y.; Project administration, R.A. (Rusli Abdullah), Y.Y. and R.A. (Rodziah Atan); Resources, R.A. (Rusli Abdullah), Y.Y. and R.A. (Rodziah Atan); Supervision, R.A. (Rusli Abdullah), Y.Y. and R.A. (Rodziah Atan); Validation, Y.A.M.Q. and R.A. (Rodziah Atan); Writing—original draft, Y.A.M.Q.; Writing—review & editing, Y.A.M.Q. and R.A. (Rusli Abdullah). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Research Management Center (RMC), Universiti Putra Malaysia (UPM), UPM Journal Publication Fund (9001103).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Constructs and their Measurement Items.
Constructs | Reflective/Formative | Measurement Items (Items; Adapted Source; Previous Studies) | Theories
CC Continuous Intention | Reflective | (1 = Strongly Disagree to 7 = Strongly Agree)
CCA1: Our institution intends to continue using the cloud computing service rather than discontinue.
CCA2: Our institution’s intention is to continue using the cloud computing service rather than to use any other means (traditional software).
CCA3: If we could, our institution would like to discontinue the use of the cloud computing service. (reverse coded).
Adapted source: [101]; previous studies: [45,72,76]; theory: ECM & ISD.
Satisfaction (SAT) | Reflective | How do you feel about your overall experience with your current cloud computing service (SaaS, IaaS, or PaaS)?
SAT1: Very dissatisfied (1)–Very satisfied (7)
SAT2: Very displeased (1)–Very pleased (7)
SAT3: Very frustrated (1)–Very contented (7)
SAT4: Absolutely terrible (1)–Absolutely delighted (7).
Adapted source: [101]; previous studies: [45,72,76]; theory: ECM.
Confirmation (Con) | Reflective | (1 = Strongly Disagree to 7 = Strongly Agree)
CON1. Our experience with using cloud computing services was better than what we expected.
CON2. The benefits with using cloud computing services were better than we expected.
CON3. The functionalities provided by cloud computing services for team projects were better than what we expected.
CON4. Cloud computing services support our institution more than expected.
CON5. Overall, most of our expectations from using cloud computing services were confirmed.
Adapted source: [101]; previous studies: [45,72]; theory: ECM.
Net Benefits (NB) | Formative | Our cloud computing service…
NB1. … increases the productivity of end-users.
NB2. … increases the overall productivity of the institution.
NB3. … enables individual users to make better decisions.
NB4. … helps to save IT-related costs.
NB5. … makes it easier to plan the IT costs of the institution.
NB6. … enhances our strategic flexibility.
NB7. … enhances the ability of the institution to innovate.
NB8. … enhances the mobility of the institution’s employees.
NB9. … improves the quality of the institution’s business processes.
NB10. … shifts the risks of IT failures from my institution to the provider.
NB11. … lowers the IT staff requirements within the institution to keep the system running.
NB12. … improves outcomes/outputs of my institution.
Adapted source: [105,120]; previous studies: [14,77,78,152]; theory: ECM.
NB13. … has brought significant benefits to the institution. (Adapted source: [116].)
Technical Integration (TI) | Reflective | TI1. The technical characteristics of the cloud computing service make it complex.
TI2. The cloud computing service depends on a sophisticated integration of technology components.
TI3. There is considerable technical complexity underlying the cloud computing service.
Adapted source: [16]; previous studies: [14,78]; theory: ISD.
System Quality (SQ) | Formative | Our cloud computing service…
SQ1. … operates reliably and stably.
SQ2. … can be flexibly adjusted to new demands or conditions.
SQ3. … effectively integrates data from different areas of the company.
SQ4. … makes information easy to access (accessibility).
SQ5. … is easy to use.
SQ6. … provides information in a timely fashion (response time).
SQ7. … provides key features and functionalities that meet the institution’s requirements.
SQ8. … is secure.
SQ9. … is easy to learn.
SQ10. … meets different user requirements within the institution.
SQ11. … is easy to upgrade from an older to a newer version.
SQ12. … is easy to customize (after implementation, e.g., user interface).
Adapted source: [105,120]; previous studies: [14,77,78,152]; theory: ISS.
SQ13. Overall, our cloud computing system is of high quality. (Adapted source: [116].)
Information Quality (IQ) | Formative | Our cloud computing service…
IQ1. … provides a complete set of information.
IQ2. … produces correct information.
IQ3. … provides information which is well formatted.
IQ4. … provides me with the most recent information.
IQ5. … produces relevant information with limited unnecessary elements.
IQ6. … produces information which is easy to understand.
Adapted source: [105,120]; previous studies: [14,77,78,152].
IQ7. In general, our cloud computing service provides our institution with high-quality information. (Adapted source: [116].)
System Investment (SI) | Reflective | SI1. Significant organizational resources have been invested in our cloud computing service.
SI2. We have committed considerable time and money to the implementation and operation of the cloud-based system.
SI3. The financial investments that have been made in the cloud-based system are substantial.
Adapted source: [16]; previous studies: [14,78]; theory: ISD.
Collaboration (Col) | Reflective | Col1. Interaction of our institution with employees, industry, and other institutions is easy with the continuance use of cloud computing service.
Col2. Collaboration between our institution and industry is raised by the continuance use of cloud computing service.
Col3. The continuance use of cloud computing service improves collaboration among institutions.
Col4. If our institution continues using cloud computing service, it can communicate with its partners (institutions and industry).
Col5. Communication with the institution’s partners (institutions and industry) is enhanced by the continuance use of cloud computing service.
Adapted source: [195,196]; previous studies: [42,142,143,144]; theory: TOE.
Regulatory Policy (RP) | Reflective | RP1. Our institution is under pressure from some government agencies to continue using cloud computing service.
RP2. The government is providing us with incentives to continue using cloud computing service.
RP3. The government is active in setting up the facilities to enable cloud computing service.
RP4. The laws and regulations that exist nowadays are sufficient to protect the use of cloud computing service.
RP5. There is legal protection in the use of cloud computing service.
Adapted source: [172,252,253]; previous studies: [145,146,147]; theory: TOE.
Competitive Pressure (CP) | Reflective | CP1. Our institution thinks that the continuance use of cloud computing service has an influence on competition among other institutions.
CP2. Our institution will lose students to competitors if it does not keep using cloud computing service.
CP3. Our institution is under pressure from competitors to continue using cloud computing service.
CP4. Some of our competitors have been using cloud computing service.
Adapted source: [170,171,197]; previous studies: [72,145]; theory: TOE.

References

  1. Alexander, B. Social networking in higher education. In The Tower and the Cloud; EDUCAUSE: Louisville, CO, USA, 2008; pp. 197–201. [Google Scholar]
  2. Katz, N. The Tower and the Cloud: Higher Education in the Age of Cloud Computing; EDUCAUSE: Louisville, CO, USA, 2008; Volume 9. [Google Scholar]
  3. Sultan, N. Cloud computing for education: A new dawn? Int. J. Inf. Manag. 2010, 30, 109–116. [Google Scholar] [CrossRef]
  4. Son, I.; Lee, D.; Lee, J.-N.; Chang, Y.B. Market perception on cloud computing initiatives in organizations: An extended resource-based view. Inf. Manag. 2014, 51, 653–669. [Google Scholar] [CrossRef]
  5. Salim, S.A.; Sedera, D.; Sawang, S.; Alarifi, A.H.E.; Atapattu, M. Moving from Evaluation to Trial: How do SMEs Start Adopting Cloud ERP? Australas. J. Inf. Syst. 2015, 19. [Google Scholar] [CrossRef]
  6. Mell, P.; Grance, T. The NIST Definition of Cloud Computing; U.S. Department of Commerce, National Institute of Standards and Technology: Gaithersburg, MD, USA, 2011. [Google Scholar]
  7. González-Martínez, J.A.; Bote-Lorenzo, M.L.; Gómez-Sánchez, E.; Cano-Parra, R. Cloud computing and education: A state-of-the-art survey. Comput. Educ. 2015, 80, 132–151. [Google Scholar] [CrossRef]
  8. Qasem, Y.A.M.; Abdullah, R.; Jusoh, Y.Y.; Atan, R.; Asadi, S. Cloud Computing Adoption in Higher Education Institutions: A Systematic Review. IEEE Access 2019, 7, 63722–63744. [Google Scholar] [CrossRef]
  9. Rodríguez Monroy, C.; Almarcha Arias, G.C.; Núñez Guerrero, Y. The new cloud computing paradigm: The way to IT seen as a utility. Lat. Am. Caribb. J. Eng. Educ. 2012, 6, 24–31. [Google Scholar]
  10. IDC. IDC Forecasts Worldwide Public Cloud Services Spending. 2019. Available online: https://www.idc.com/getdoc.jsp?containerId=prUS44891519 (accessed on 28 July 2020).
  11. Hsu, P.-F.; Ray, S.; Li-Hsieh, Y.-Y. Examining cloud computing adoption intention, pricing mechanism, and deployment model. Int. J. Inf. Manag. 2014, 34, 474–488. [Google Scholar] [CrossRef]
  12. Dubey, A.; Wagle, D. Delivering Software as a Service. The McKinsey Quarterly, 6 May 2007. [Google Scholar]
  13. Walther, S.; Sedera, D.; Urbach, N.; Eymann, T.; Otto, B.; Sarker, S. Should We Stay, or Should We Go? Analyzing Continuance of Cloud Enterprise Systems. J. Inf. Technol. Theory Appl. 2018, 19, 4. [Google Scholar]
  14. Qasem, Y.A.; Abdullah, R.; Jusoh, Y.Y.; Atan, R. Conceptualizing a model for Continuance Use of Cloud Computing in Higher Education Institutions. In Proceedings of the AMCIS 2020 TREOs, Salt Lake City, UT, USA, 10–14 August 2020; p. 30. [Google Scholar]
  15. Furneaux, B.; Wade, M.R. An exploration of organizational level information systems discontinuance intentions. MIS Q. 2011, 35, 573–598. [Google Scholar] [CrossRef]
  16. Long, K. Unit of Analysis. Encyclopedia of Social Science Research Methods; SAGE Publications, Inc.: Los Angeles, CA, USA, 2004. [Google Scholar]
  17. Berman, S.J.; Kesterson-Townes, L.; Marshall, A.; Srivathsa, R. How cloud computing enables process and business model innovation. Strategy Leadersh. 2012, 40, 27–35. [Google Scholar] [CrossRef]
  18. Stahl, E.; Duijvestijn, L.; Fernandes, A.; Isom, P.; Jewell, D.; Jowett, M.; Stockslager, T. Performance Implications of Cloud Computing; Red Paper: New York, NY, USA, 2012. [Google Scholar]
  19. Thorsteinsson, G.; Page, T.; Niculescu, A. Using virtual reality for developing design communication. Stud. Inform. Control 2010, 19, 93–106. [Google Scholar] [CrossRef]
  20. Pocatilu, P.; Alecu, F.; Vetrici, M. Using cloud computing for E-learning systems. In Proceedings of the 8th WSEAS International Conference on Data networks, Communications, Computers, Baltimore, MD, USA, 7–9 November 2009; pp. 54–59. [Google Scholar]
  21. Sasikala, S.; Prema, S. Massive centralized cloud computing (MCCC) exploration in higher education. Adv. Comput. Sci. Technol. 2011, 3, 111. [Google Scholar]
  22. García-Peñalvo, F.J.; Johnson, M.; Alves, G.R.; Minović, M.; Conde-González, M.Á. Informal learning recognition through a cloud ecosystem. Future Gener. Comput. Syst. 2014, 32, 282–294. [Google Scholar] [CrossRef] [Green Version]
23. Nguyen, T.D.; Nguyen, T.M.; Pham, Q.T.; Misra, S. Acceptance and Use of E-Learning Based on Cloud Computing: The Role of Consumer Innovativeness. In Computational Science and Its Applications—ICCSA 2014, Part V; Murgante, B., Misra, S., Rocha, A., Torre, C., Rocha, J.G., Falcao, M.I., Taniar, D., Apduhan, B.O., Gervasi, O., Eds.; Springer: Cham, Switzerland, 2014; pp. 159–174. [Google Scholar]
  24. Pinheiro, P.; Aparicio, M.; Costa, C. Adoption of cloud computing systems. In Proceedings of the International Conference on Information Systems and Design of Communication—ISDOC ’14, Lisbon, Portugal, 16 May 2014; ACM: Lisbon, Portugal, 2014; pp. 127–131. [Google Scholar]
  25. Behrend, T.S.; Wiebe, E.N.; London, J.E.; Johnson, E.C. Cloud computing adoption and usage in community colleges. Behav. Inf. Technol. 2011, 30, 231–240. [Google Scholar] [CrossRef]
  26. Almazroi, A.A.; Shen, H.F.; Teoh, K.K.; Babar, M.A. Cloud for e-Learning: Determinants of its Adoption by University Students in a Developing Country. In Proceedings of the 2016 IEEE 13th International Conference on E-Business Engineering (Icebe), Macau, China, 4–6 November 2016; pp. 71–78. [Google Scholar]
27. Meske, C.; Stieglitz, S.; Vogl, R.; Rudolph, D.; Oksuz, A. Cloud Storage Services in Higher Education—Results of a Preliminary Study in the Context of the Sync&Share-Project in Germany. In Learning and Collaboration Technologies: Designing and Developing Novel Learning Experiences, Part I; Zaphiris, P., Ioannou, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; pp. 161–171. [Google Scholar]
  28. Khatib, M.M.E.; Opulencia, M.J.C. The Effects of Cloud Computing (IaaS) on E- Libraries in United Arab Emirates. Procedia Econ. Financ. 2015, 23, 1354–1357. [Google Scholar] [CrossRef]
  29. Arpaci, I.; Kilicer, K.; Bardakci, S. Effects of security and privacy concerns on educational use of cloud services. Comput. Hum. Behav. 2015, 45, 93–98. [Google Scholar] [CrossRef]
  30. Park, S.C.; Ryoo, S.Y. An empirical investigation of end-users’ switching toward cloud computing: A two factor theory perspective. Comput. Hum. Behav. 2013, 29, 160–170. [Google Scholar] [CrossRef]
31. Riaz, S.; Muhammad, J. An Evaluation of Public Cloud Adoption for Higher Education: A case study from Pakistan. In Proceedings of the 2015 International Symposium on Mathematical Sciences and Computing Research (iSMSC), Ipoh, Malaysia, 19–20 May 2015; pp. 208–213. [Google Scholar]
  32. Militaru, G.; Purcărea, A.A.; Negoiţă, O.D.; Niculescu, A. Examining Cloud Computing Adoption Intention in Higher Education: Exploratory Study. In Exploring Services Science; Borangiu, T., Dragoicea, M., Novoa, H., Eds.; 2016; pp. 732–741. [Google Scholar]
  33. Wu, W.W.; Lan, L.W.; Lee, Y.T. Factors hindering acceptance of using cloud services in university: A case study. Electron. Libr. 2013, 31, 84–98. [Google Scholar] [CrossRef] [Green Version]
  34. Gurung, R.K.; Alsadoon, A.; Prasad, P.W.C.; Elchouemi, A. Impacts of Mobile Cloud Learning (MCL) on Blended Flexible Learning (BFL). In Proceedings of the 2016 International Conference on Information and Digital Technologies (IDT), Rzeszow, Poland, 5–7 July 2016; pp. 108–114. [Google Scholar]
  35. Bhatiasevi, V.; Naglis, M. Investigating the structural relationship for the determinants of cloud computing adoption in education. Educ. Inf. Technol. 2015, 21, 1197–1223. [Google Scholar] [CrossRef]
  36. Yeh, C.-H.; Hsu, C.-C. The Learning Effect of Students’ Cognitive Styles in Using Cloud Technology. In Advances in Web-Based Learning—ICWL 2013 Workshops; Chiu, D.K.W., Wang, M., Popescu, E., Li, Q., Lau, R., Shih, T.K., Yang, C.-S., Sampson, D.G., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; pp. 155–163. [Google Scholar]
  37. Stantchev, V.; Colomo-Palacios, R.; Soto-Acosta, P.; Misra, S. Learning management systems and cloud file hosting services: A study on students’ acceptance. Comput. Hum. Behav. 2014, 31, 612–619. [Google Scholar] [CrossRef]
  38. Yuvaraj, M. Perception of cloud computing in developing countries. Libr. Rev. 2016, 65, 33–51. [Google Scholar] [CrossRef]
  39. Sharma, S.K.; Al-Badi, A.H.; Govindaluri, S.M.; Al-Kharusi, M.H. Predicting motivators of cloud computing adoption: A developing country perspective. Comput. Hum. Behav. 2016, 62, 61–69. [Google Scholar] [CrossRef]
  40. Kankaew, V.; Wannapiroon, P. System Analysis of Virtual Team in Cloud Computing to Enhance Teamwork Skills of Undergraduate Students. Procedia Soc. Behav. Sci. 2015, 174, 4096–4102. [Google Scholar] [CrossRef] [Green Version]
41. Yadegaridehkordi, E.; Iahad, N.A.; Ahmad, N. Task-Technology Fit and User Adoption of Cloud-based Collaborative Learning Technologies. In Proceedings of the 2014 International Conference on Computer and Information Sciences (ICCOINS), Kuala Lumpur, Malaysia, 3–5 June 2014. [Google Scholar]
  42. Arpaci, I. Understanding and predicting students’ intention to use mobile cloud storage services. Comput. Hum. Behav. 2016, 58, 150–157. [Google Scholar] [CrossRef]
  43. Shiau, W.-L.; Chau, P.Y.K. Understanding behavioral intention to use a cloud computing classroom: A multiple model comparison approach. Inf. Manag. 2016, 53, 355–365. [Google Scholar] [CrossRef]
  44. Tan, X.; Kim, Y. User acceptance of SaaS-based collaboration tools: A case of Google Docs. J. Enterp. Inf. Manag. 2015, 28, 423–442. [Google Scholar] [CrossRef]
  45. Atchariyachanvanich, K.; Siripujaka, N.; Jaiwong, N. What Makes University Students Use Cloud-based E-Learning?: Case Study of KMITL Students. In Proceedings of the 2014 International Conference on Information Society (I-Society 2014), London, UK, 10–12 November 2014; pp. 112–116. [Google Scholar]
  46. Vaquero, L.M. EduCloud: PaaS versus IaaS Cloud Usage for an Advanced Computer Science Course. IEEE Trans. Educ. 2011, 54, 590–598. [Google Scholar] [CrossRef]
  47. Ashtari, S.; Eydgahi, A. Student Perceptions of Cloud Computing Effectiveness in Higher Education. In Proceedings of the 2015 IEEE 18th International Conference on Computational Science and Engineering (CSE), Porto, Portugal, 21–23 October 2015; pp. 184–191. [Google Scholar]
  48. Qasem, Y.A.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Cloud-Based Education As a Service (CEAAS) System Requirements Specification Model of Higher Education Institutions in Industrial Revolution 4.0. Int. J. Recent Technol. Eng. 2019, 8. [Google Scholar] [CrossRef]
  49. Huang, Y.-M. The factors that predispose students to continuously use cloud services: Social and technological perspectives. Comput. Educ. 2016, 97, 86–96. [Google Scholar] [CrossRef]
  50. Tashkandi, A.N.; Al-Jabri, I.M. Cloud computing adoption by higher education institutions in Saudi Arabia: An exploratory study. Clust. Comput. 2015, 18, 1527–1537. [Google Scholar] [CrossRef]
  51. Tashkandi, A.; Al-Jabri, I. Cloud Computing Adoption by Higher Education Institutions in Saudi Arabia: Analysis Based on TOE. In Proceedings of the 2015 International Conference on Cloud Computing, ICCC, Riyadh, Saudi Arabia, 26–29 April 2015. [Google Scholar]
  52. Dahiru, A.A.; Bass, J.M.; Allison, I.K. Cloud computing adoption in sub-Saharan Africa: An analysis using institutions and capabilities. In Proceedings of the International Conference on Information Society, i-Society 2014, London, UK, 10–12 November 2014; pp. 98–103. [Google Scholar]
  53. Shakeabubakor, A.A.; Sundararajan, E.; Hamdan, A.R. Cloud Computing Services and Applications to Improve Productivity of University Researchers. Int. J. Inf. Electron. Eng. 2015, 5, 153. [Google Scholar] [CrossRef] [Green Version]
  54. Md Kassim, S.S.; Salleh, M.; Zainal, A. Cloud Computing: A General User’s Perception and Security Awareness in Malaysian Polytechnic. In Pattern Analysis, Intelligent Security and the Internet of Things; Abraham, A., Muda, A.K., Choo, Y.-H., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 131–140. [Google Scholar]
  55. Sabi, H.M.; Uzoka, F.-M.E.; Langmia, K.; Njeh, F.N. Conceptualizing a model for adoption of cloud computing in education. Int. J. Inf. Manag. 2016, 36, 183–191. [Google Scholar] [CrossRef]
  56. Sabi, H.M.; Uzoka, F.-M.E.; Langmia, K.; Njeh, F.N.; Tsuma, C.K. A cross-country model of contextual factors impacting cloud computing adoption at universities in sub-Saharan Africa. Inf. Syst. Front. 2017, 20, 1381–1404. [Google Scholar] [CrossRef]
  57. Yuvaraj, M. Determining factors for the adoption of cloud computing in developing countries. Bottom Line 2016, 29, 259–272. [Google Scholar] [CrossRef]
  58. Surya, G.S.F.; Surendro, K. E-Readiness Framework for Cloud Computing Adoption in Higher Education. In Proceedings of the 2014 International Conference of Advanced Informatics: Concept, Theory and Application (ICAICTA), Bandung, Indonesia, 20–21 August 2014; pp. 278–282. [Google Scholar]
  59. Alharthi, A.; Alassafi, M.O.; Walters, R.J.; Wills, G.B. An exploratory study for investigating the critical success factors for cloud migration in the Saudi Arabian higher education context. Telemat. Inform. 2017, 34, 664–678. [Google Scholar] [CrossRef] [Green Version]
  60. Mokhtar, S.A.; Al-Sharafi, A.; Ali, S.H.S.; Aborujilah, A. Organizational Factors in the Adoption of Cloud Computing in E-learning. In Proceedings of the 3rd International Conference on Advanced Computer Science Applications and Technologies Acsat, Amman, Jordan, 29–30 December 2014; pp. 188–191. [Google Scholar]
  61. Lal, P. Organizational learning management systems: Time to move learning to the cloud! Dev. Learn. Organ. Int. J. 2015, 29, 13–15. [Google Scholar] [CrossRef]
  62. Yuvaraj, M. Problems and prospects of implementing cloud computing in university libraries. Libr. Rev. 2015, 64, 567–582. [Google Scholar] [CrossRef]
  63. Koch, F.; Assunção, M.D.; Cardonha, C.; Netto, M.A.S. Optimising resource costs of cloud computing for education. Future Gener. Comput. Syst. 2016, 55, 473–479. [Google Scholar] [CrossRef] [Green Version]
  64. Qasem, Y.A.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Towards Developing A Cloud-Based Education As A Service (CEAAS) Model For Cloud Computing Adoption in Higher Education Institutions. Complexity 2018, 6, 7. [Google Scholar] [CrossRef]
  65. Qasem, Y.A.M.; Abdullah, R.; Yah, Y.; Atan, R.; Al-Sharafi, M.A.; Al-Emran, M. Towards the Development of a Comprehensive Theoretical Model for Examining the Cloud Computing Adoption at the Organizational Level. In Recent Advances in Intelligent Systems and Smart Applications; Al-Emran, M., Shaalan, K., Hassanien, A.E., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 63–74. [Google Scholar]
  66. Jia, Q.; Guo, Y.; Barnes, S.J. Enterprise 2.0 post-adoption: Extending the information system continuance model based on the technology-Organization-environment framework. Comput. Hum. Behav. 2017, 67, 95–105. [Google Scholar] [CrossRef] [Green Version]
  67. Tripathi, S. Understanding the determinants affecting the continuance intention to use cloud computing. J. Int. Technol. Inf. Manag. 2017, 26, 124–152. [Google Scholar]
  68. Obal, M. What drives post-adoption usage? Investigating the negative and positive antecedents of disruptive technology continuous adoption intentions. Ind. Mark. Manag. 2017, 63, 42–52. [Google Scholar] [CrossRef]
  69. Ratten, V. Continuance use intention of cloud computing: Innovativeness and creativity perspectives. J. Bus. Res. 2016, 69, 1737–1740. [Google Scholar] [CrossRef]
  70. Flack, C.K. IS Success Model for Evaluating Cloud Computing for Small Business Benefit: A Quantitative Study. Ph.D. Thesis, Kennesaw State University, Kennesaw, GA, USA, 2016. [Google Scholar]
  71. Walther, S.; Sarker, S.; Urbach, N.; Sedera, D.; Eymann, T.; Otto, B. Exploring organizational level continuance of cloud-based enterprise systems. In Proceedings of the ECIS 2015 Completed Research Papers, Münster, Germany, 26–29 May 2015. [Google Scholar]
  72. Ghobakhloo, M.; Tang, S.H. Information system success among manufacturing SMEs: Case of developing countries. Inf. Technol. Dev. 2015, 21, 573–600. [Google Scholar] [CrossRef]
  73. Schlagwein, D.; Thorogood, A. Married for life? A cloud computing client-provider relationship continuance model. In Proceedings of the European Conference on Information Systems (ECIS) 2014, Tel Aviv, Israel, 9–11 June 2014. [Google Scholar]
  74. Hadji, B.; Degoulet, P. Information system end-user satisfaction and continuance intention: A unified modeling approach. J. Biomed. Inf. 2016, 61, 185–193. [Google Scholar] [CrossRef]
  75. Esteves, J.; Bohórquez, V.W. An Updated ERP Systems Annotated Bibliography: 2001–2005; Instituto de Empresa Business School Working Paper No. WP; Instituto de Empresa Business School: Madrid, Spain, 2007; pp. 4–7. [Google Scholar]
  76. Gable, G.G.; Sedera, D.; Chan, T. Re-conceptualizing information system success: The IS-impact measurement model. J. Assoc. Inf. Syst. 2008, 9, 18. [Google Scholar] [CrossRef]
  77. Sedera, D.; Gable, G.G. Knowledge management competence for enterprise system success. J. Strateg. Inf. Syst. 2010, 19, 296–306. [Google Scholar] [CrossRef] [Green Version]
  78. Walther, S.; Plank, A.; Eymann, T.; Singh, N.; Phadke, G. Success factors and value propositions of software as a service providers—A literature review and classification. In Proceedings of the 2012 AMCIS: 18th Americas Conference on Information Systems, Seattle, WA, USA, 9–12 August 2012. [Google Scholar]
  79. Ashtari, S.; Eydgahi, A. Student perceptions of cloud applications effectiveness in higher education. J. Comput. Sci. 2017, 23, 173–180. [Google Scholar] [CrossRef]
  80. Ding, Y. Looking forward: The role of hope in information system continuance. Comput. Hum. Behav. 2019, 91, 127–137. [Google Scholar] [CrossRef]
  81. Rogers, E. Diffusion of Innovation; Macmillan Press Ltd.: London, UK, 1962. [Google Scholar]
82. Ettlie, J.E. Adequacy of stage models for decisions on adoption of innovation. Psychol. Rep. 1980, 46, 991–995. [Google Scholar] [CrossRef]
83. Fichman, R.G.; Kemerer, C.F. The assimilation of software process innovations: An organizational learning perspective. Manag. Sci. 1997, 43, 1345–1363. [Google Scholar] [CrossRef] [Green Version]
  84. Salim, S.A.; Sedera, D.; Sawang, S.; Alarifi, A. Technology adoption as a multi-stage process. In Proceedings of the 25th Australasian Conference on Information Systems (ACIS), Auckland, New Zealand, 8–10 December 2014. [Google Scholar]
  85. Fichman, R.G.; Kemerer, C.F. Adoption of software engineering process innovations: The case of object orientation. Sloan Manag. Rev. 1993, 34, 7–23. [Google Scholar]
  86. Choudhury, V.; Karahanna, E. The relative advantage of electronic channels: A multidimensional view. MIS Q. 2008, 32, 179. [Google Scholar] [CrossRef] [Green Version]
  87. Karahanna, E.; Straub, D.W.; Chervany, N.L. Information technology adoption across time: A cross-sectional comparison of pre-adoption and post-adoption beliefs. MIS Q. 1999, 23, 183. [Google Scholar] [CrossRef]
  88. Pavlou, P.A.; Fygenson, M. Understanding and predicting electronic commerce adoption: An extension of the theory of planned behavior. MIS Q. 2006, 30, 115. [Google Scholar] [CrossRef]
  89. Shoham, A. Selecting and evaluating trade shows. Ind. Mark. Manag. 1992, 21, 335–341. [Google Scholar] [CrossRef]
  90. Mintzberg, H.; Raisinghani, D.; Theoret, A. The Structure of “Unstructured” Decision Processes. Adm. Sci. Q. 1976, 21, 246–275. [Google Scholar] [CrossRef] [Green Version]
  91. Pierce, J.L.; Delbecq, A.L. Organization structure, individual attitudes and innovation. Acad. Manag. Rev. 1977, 2, 27–37. [Google Scholar] [CrossRef]
  92. Zmud, R.W. Diffusion of modern software practices: Influence of centralization and formalization. Manag. Sci. 1982, 28, 1421–1431. [Google Scholar] [CrossRef]
  93. Aguirre-Urreta, M.I.; Marakas, G.M. Exploring choice as an antecedent to behavior: Incorporating alternatives into the technology acceptance process. J. Organ. End User Comput. 2012, 24, 82–107. [Google Scholar] [CrossRef]
  94. Schwarz, A.; Chin, W.W.; Hirschheim, R.; Schwarz, C. Toward a process-based view of information technology acceptance. J. Inf. Technol. 2014, 29, 73–96. [Google Scholar] [CrossRef]
  95. Maier, C.; Laumer, S.; Weinert, C.; Weitzel, T. The effects of technostress and switching stress on discontinued use of social networking services: A study of Facebook use. Inf. Syst. J. 2015, 25, 275–308. [Google Scholar] [CrossRef]
  96. Damanpour, F.; Schneider, M. Phases of the adoption of innovation in organizations: Effects of environment, organization and top managers 1. Br. J. Manag. 2006, 17, 215–236. [Google Scholar] [CrossRef]
  97. Jeyaraj, A.; Rottman, J.W.; Lacity, M.C. A review of the predictors, linkages, and biases in IT innovation adoption research. J. Inf. Technol. 2006, 21, 1–23. [Google Scholar] [CrossRef]
  98. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319. [Google Scholar] [CrossRef] [Green Version]
  99. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef] [Green Version]
  100. Oliver, R.L. A cognitive model of the antecedents and consequences of satisfaction decisions. J. Mark. Res. 1980, 17, 460–469. [Google Scholar] [CrossRef]
  101. Bhattacherjee, A. Understanding information systems continuance: An expectation-confirmation model. MIS Q. 2001, 25, 351. [Google Scholar] [CrossRef]
  102. Tornatzky, L.G.; Fleischer, M.; Chakrabarti, A.K. Processes of Technological Innovation; Lexington Books: Lanham, MD, USA, 1990. [Google Scholar]
  103. Rogers, E.M. Diffusion of Innovations; The Free Press: New York, NY, USA, 1995; p. 12. [Google Scholar]
  104. Teo, H.-H.; Wei, K.K.; Benbasat, I. Predicting intention to adopt interorganizational linkages: An institutional perspective. MIS Q. 2003, 27, 19. [Google Scholar] [CrossRef] [Green Version]
  105. DeLone, W.H.; McLean, E.R. Information systems success: The quest for the dependent variable. Inf. Syst. Res. 1992, 3, 60–95. [Google Scholar] [CrossRef]
  106. Eden, R.; Sedera, D.; Tan, F. Sustaining the Momentum: Archival Analysis of Enterprise Resource Planning Systems (2006–2012). Commun. Assoc. Inf. Syst. 2014, 35, 3. [Google Scholar] [CrossRef]
  107. Chou, S.-W.; Chen, P.-Y. The influence of individual differences on continuance intentions of enterprise resource planning (ERP). Int. J. Hum. Comput. Stud. 2009, 67, 484–496. [Google Scholar] [CrossRef]
  108. Lin, W.-S. Perceived fit and satisfaction on web learning performance: IS continuance intention and task-technology fit perspectives. Int. J. Hum. Comput. Stud. 2012, 70, 498–507. [Google Scholar] [CrossRef]
  109. Karahanna, E.; Straub, D. The psychological origins of perceived usefulness and ease-of-use. Inf. Manag. 1999, 35, 237–250. [Google Scholar] [CrossRef]
  110. Roca, J.C.; Chiu, C.-M.; Martínez, F.J. Understanding e-learning continuance intention: An extension of the Technology Acceptance Model. Int. J. Hum. Comput. Stud. 2006, 64, 683–696. [Google Scholar] [CrossRef] [Green Version]
  111. Dai, H.M.; Teo, T.; Rappa, N.A.; Huang, F. Explaining Chinese university students’ continuance learning intention in the MOOC setting: A modified expectation confirmation model perspective. Comput. Educ. 2020, 150, 103850. [Google Scholar] [CrossRef]
  112. Ouyang, Y.; Tang, C.; Rong, W.; Zhang, L.; Yin, C.; Xiong, Z. Task-technology fit aware expectation-confirmation model towards understanding of MOOCs continued usage intention. In Proceedings of the 50th Hawaii International Conference on System Sciences, Hilton Waikoloa Village, HI, USA, 4–7 January 2017. [Google Scholar]
  113. Joo, Y.J.; So, H.-J.; Kim, N.H. Examination of relationships among students’ self-determination, technology acceptance, satisfaction, and continuance intention to use K-MOOCs. Comput. Educ. 2018, 122, 260–272. [Google Scholar] [CrossRef]
  114. Thong, J.Y.; Hong, S.-J.; Tam, K.Y. The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance. Int. J. Hum. Comput. Stud. 2006, 64, 799–810. [Google Scholar] [CrossRef]
  115. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  116. Wixom, B.H.; Todd, P.A. A theoretical integration of user satisfaction and technology acceptance. Inf. Syst. Res. 2005, 16, 85–102. [Google Scholar] [CrossRef]
  117. Lokuge, S.; Sedera, D. Deriving information systems innovation execution mechanisms. In Proceedings of the 25th Australasian Conference on Information Systems (ACIS), Auckland, New Zealand, 8–10 December 2014. [Google Scholar]
  118. Lokuge, S.; Sedera, D. Enterprise systems lifecycle-wide innovation readiness. In Proceedings of the PACIS 2014 Proceedings, Chengdu, China, 24–28 June 2014; pp. 1–14. [Google Scholar]
  119. Melville, N.; Kraemer, K.; Gurbaxani, V. Information technology and organizational performance: An integrative model of IT business value. MIS Q. 2004, 28, 283–322. [Google Scholar] [CrossRef] [Green Version]
  120. Delone, W.H.; McLean, E.R. The DeLone and McLean model of information systems success: A ten-year update. J. Manag. Inf. Syst. 2003, 19, 9–30. [Google Scholar]
  121. Urbach, N.; Smolnik, S.; Riempp, G. The state of research on information systems success. Bus. Inf. Syst. Eng. 2009, 1, 315–325. [Google Scholar] [CrossRef] [Green Version]
  122. Wang, Y.S. Assessing e-commerce systems success: A respecification and validation of the DeLone and McLean model of IS success. Inf. Syst. J. 2008, 18, 529–557. [Google Scholar] [CrossRef]
  123. Urbach, N.; Smolnik, S.; Riempp, G. An empirical investigation of employee portal success. J. Strateg. Inf. Syst. 2010, 19, 184–206. [Google Scholar] [CrossRef]
  124. Barki, H.; Huff, S.L. Change, attitude to change, and decision support system success. Inf. Manag. 1985, 9, 261–268. [Google Scholar] [CrossRef]
  125. Gelderman, M. The relation between user satisfaction, usage of information systems and performance. Inf. Manag. 1998, 34, 11–18. [Google Scholar] [CrossRef]
  126. Seddon, P.B. A respecification and extension of the DeLone and McLean model of IS success. Inf. Syst. Res. 1997, 8, 240–253. [Google Scholar] [CrossRef]
  127. Yuthas, K.; Young, S.T. Material matters: Assessing the effectiveness of materials management IS. Inf. Manag. 1998, 33, 115–124. [Google Scholar] [CrossRef]
  128. Burton-Jones, A.; Gallivan, M.J. Toward a deeper understanding of system usage in organizations: A multilevel perspective. MIS Q. 2007, 31, 657. [Google Scholar] [CrossRef] [Green Version]
  129. Sedera, D.; Gable, G.; Chan, T. ERP success: Does organisation Size Matter? In Proceedings of the PACIS 2003 Proceedings, Adelaide, Australia, 10–13 July 2003; p. 74. [Google Scholar]
  130. Sedera, D.; Gable, G.; Chan, T. Knowledge management for ERP success. In Proceedings of the PACIS 2003 Proceedings, Adelaide, Australia, 10–13 July 2003; p. 97. [Google Scholar]
  131. Abolfazli, S.; Sanaei, Z.; Tabassi, A.; Rosen, S.; Gani, A.; Khan, S.U. Cloud Adoption in Malaysia: Trends, Opportunities, and Challenges. IEEE Cloud Comput. 2015, 2, 60–68. [Google Scholar] [CrossRef]
  132. Arkes, H.R.; Blumer, C. The psychology of sunk cost. Organ. Behav. Hum. Decis. Process. 1985, 35, 124–140. [Google Scholar] [CrossRef]
  133. Ahtiala, P. The optimal pricing of computer software and other products with high switching costs. Int. Rev. Econ. Financ. 2006, 15, 202–211. [Google Scholar] [CrossRef] [Green Version]
  134. Benlian, A.; Vetter, J.; Hess, T. The role of sunk cost in consecutive IT outsourcing decisions. Z. Fur Betr. 2012, 82, 181. [Google Scholar]
  135. Armbrust, M.; Fox, A.; Griffith, R.; Joseph, A.D.; Katz, R.; Konwinski, A.; Lee, G.; Patterson, D.; Rabkin, A.; Stoica, I. A view of cloud computing. Commun. ACM 2010, 53, 50–58. [Google Scholar] [CrossRef] [Green Version]
  136. Wei, Y.; Blake, M.B. Service-oriented computing and cloud computing: Challenges and opportunities. IEEE Internet Comput. 2010, 14, 72–75. [Google Scholar] [CrossRef]
  137. Bughin, J.; Chui, M.; Manyika, J. Clouds, big data, and smart assets: Ten tech-enabled business trends to watch. McKinsey Q. 2010, 56, 75–86. [Google Scholar]
  138. Lin, H.-F. Understanding the determinants of electronic supply chain management system adoption: Using the Technology–Organization–Environment framework. Technol. Forecast. Soc. Chang. 2014, 86, 80–92. [Google Scholar] [CrossRef]
  139. Oliveira, T.; Martins, M.F. Literature review of information technology adoption models at firm level. Electron. J. Inf. Syst. Eval. 2011, 14, 110–121. [Google Scholar]
  140. Chau, P.Y.; Tam, K.Y. Factors affecting the adoption of open systems: An exploratory study. MIS Q. 1997, 21, 1–24. [Google Scholar] [CrossRef]
  141. Galliers, R.D. Organizational Dynamics of Technology-Based Innovation; Springer: Boston, MA, USA, 2007; pp. 15–18. [Google Scholar]
142. Yadegaridehkordi, E.; Iahad, N.A.; Ahmad, N. Task-Technology Fit Assessment of Cloud-Based Collaborative Learning Technologies: Remote Work and Collaboration: Breakthroughs in Research and Practice. Int. J. Inf. Syst. Serv. Sect. 2017, 371–388. [Google Scholar] [CrossRef]
143. Gupta, P.; Seetharaman, A.; Raj, J.R. The usage and adoption of cloud computing by small and medium businesses. Int. J. Inf. Manag. 2013, 33, 861–874. [Google Scholar] [CrossRef]
  144. Chong, A.Y.-L.; Lin, B.; Ooi, K.-B.; Raman, M. Factors affecting the adoption level of c-commerce: An empirical study. J. Comput. Inf. Syst. 2009, 50, 13–22. [Google Scholar]
  145. Oliveira, T.; Thomas, M.; Espadanal, M. Assessing the determinants of cloud computing adoption: An analysis of the manufacturing and services sectors. Inf. Manag. 2014, 51, 497–510. [Google Scholar] [CrossRef]
  146. Senyo, P.K.; Effah, J.; Addae, E. Preliminary insight into cloud computing adoption in a developing country. J. Enterp. Inf. Manag. 2016, 29, 505–524. [Google Scholar] [CrossRef]
  147. Klug, W.; Bai, X. The determinants of cloud computing adoption by colleges and universities. Int. J. Bus. Res. Inf. Technol. 2015, 2, 14–30. [Google Scholar]
  148. Cornu, B. Digital Natives: How Do They Learn? How to Teach Them; UNESCO Institute for Information Technology in Education: Moscow, Russia, 2011; Volume 52, pp. 2–11. [Google Scholar]
149. Oblinger, D.; Oblinger, J.L.; Lippincott, J.K. Educating the Net Generation; EDUCAUSE: Boulder, CO, USA, 2005. [Google Scholar]
  150. Wymer, S.A.; Regan, E.A. Factors influencing e-commerce adoption and use by small and medium businesses. Electron. Mark. 2005, 15, 438–453. [Google Scholar] [CrossRef]
151. Qasem, Y.A.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Mapping and Analyzing Process of Cloud-based Education as a Service (CEaaS) Model for Cloud Computing Adoption in Higher Education Institutions. In Proceedings of the 2018 Fourth International Conference on Information Retrieval and Knowledge Management (CAMP), Kota Kinabalu, Malaysia, 26–28 March 2018; pp. 1–8. [Google Scholar]
  152. Walther, S.; Sedera, D.; Sarker, S.; Eymann, T. Evaluating Operational Cloud Enterprise System Success: An Organizational Perspective. In Proceedings of the ECIS, Utrecht, The Netherlands, 6–8 June 2013; p. 16. [Google Scholar]
  153. Wang, M.W.; Lee, O.-K.; Lim, K.H. Knowledge management systems diffusion in Chinese enterprises: A multi-stage approach with the technology-organization-environment framework. In Proceedings of the PACIS 2007 Proceedings, Auckland, New Zealand, 4–6 July 2007; p. 70. [Google Scholar]
  154. Liao, C.; Palvia, P.; Chen, J.-L. Information technology adoption behavior life cycle: Toward a Technology Continuance Theory (TCT). Int. J. Inf. Manag. 2009, 29, 309–320. [Google Scholar] [CrossRef]
  155. Li, Y.; Crossler, R.E.; Compeau, D. Regulatory Focus in the Context of Wearable Continuance. In Proceedings of the AMCIS 2019 Conference Site, Cancún, México, 15–17 August 2019. [Google Scholar]
156. Rousseau, D.M. Issues of level in organizational research: Multi-level and cross-level perspectives. Res. Organ. Behav. 1985, 7, 1–37. [Google Scholar]
  157. Walther, S. An Investigation of Organizational Level Continuance of Cloud-Based Enterprise Systems. Ph.D. Thesis, University of Bayreuth, Bayreuth, Germany, 2014. [Google Scholar]
  158. Petter, S.; DeLone, W.; McLean, E. Measuring information systems success: Models, dimensions, measures, and interrelationships. Eur. J. Inf. Syst. 2008, 17, 236–263. [Google Scholar] [CrossRef]
  159. Robey, D.; Zeller, R.L.J.I. Factors affecting the success and failure of an information system for product quality. Interfaces 1978, 8, 70–75. [Google Scholar] [CrossRef]
  160. Aldholay, A.; Isaac, O.; Abdullah, Z.; Abdulsalam, R.; Al-Shibami, A.H. An extension of Delone and McLean IS success model with self-efficacy: Online learning usage in Yemen. Int. J. Inf. Learn. Technol. 2018, 35, 285–304. [Google Scholar] [CrossRef]
161. Xu, J.D.; Benbasat, I.; Cenfetelli, R.T. Integrating service quality with system and information quality: An empirical test in the e-service context. MIS Q. 2013, 37, 777–794. [Google Scholar] [CrossRef]
162. Spears, J.L.; Barki, H. User participation in information systems security risk management. MIS Q. 2010, 34, 503. [Google Scholar] [CrossRef] [Green Version]
  163. Lee, S.; Shin, B.; Lee, H.G. Understanding post-adoption usage of mobile data services: The role of supplier-side variables. J. Assoc. Inf. Syst. 2009, 10, 860–888. [Google Scholar] [CrossRef] [Green Version]
  164. Alshare, K.A.; Freeze, R.D.; Lane, P.L.; Wen, H.J. The impacts of system and human factors on online learning systems use and learner satisfaction. Decis. Sci. J. Innov. Educ. 2011, 9, 437–461. [Google Scholar] [CrossRef]
  165. Benlian, A.; Koufaris, M.; Hess, T. Service quality in software-as-a-service: Developing the SaaS-Qual measure and examining its role in usage continuance. J. Manag. Inf. Syst. 2011, 28, 85–126. [Google Scholar] [CrossRef]
  166. Oblinger, D. Boomers gen-xers millennials. EDUCAUSE Rev. 2003, 500, 37–47. [Google Scholar]
  167. Monaco, M.; Martin, M. The millennial student: A new generation of learners. Athl. Train. Educ. J. 2007, 2, 42–46. [Google Scholar] [CrossRef]
  168. White, B.J.; Brown, J.A.E.; Deale, C.S.; Hardin, A.T. Collaboration using cloud computing and traditional systems. Issues Inf. Syst. 2009, 10, 27–32. [Google Scholar]
  169. Nkhoma, M.Z.; Dang, D.P.; De Souza-Daw, A. Contributing factors of cloud computing adoption: A technology-organisation-environment framework approach. In Proceedings of the European Conference on Information Management & Evaluation, Melbourne, Australia, 2–4 December 2013. [Google Scholar]
  170. Zhu, K.; Dong, S.; Xu, S.X.; Kraemer, K.L. Innovation diffusion in global contexts: Determinants of post-adoption digital transformation of European companies. Eur. J. Inf. Syst. 2006, 15, 601–616. [Google Scholar] [CrossRef]
  171. Zhu, K.; Kraemer, K.L.; Xu, S. The process of innovation assimilation by firms in different countries: A technology diffusion perspective on e-business. Manag. Sci. 2006, 52, 1557–1576. [Google Scholar] [CrossRef] [Green Version]
  172. Shah Alam, S.; Ali, M.Y.; Jaini, M.M.F. An empirical study of factors affecting electronic commerce adoption among SMEs in Malaysia. J. Bus. Econ. Manag. 2011, 12, 375–399. [Google Scholar] [CrossRef] [Green Version]
  173. Ifinedo, P. Internet/e-business technologies acceptance in Canada’s SMEs: An exploratory investigation. Internet Res. 2011, 21, 255–281. [Google Scholar] [CrossRef]
  174. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches; Sage Publications: Los Angeles, CA, USA, 2017. [Google Scholar]
  175. Moore, G.C.; Benbasat, I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf. Syst. Res. 1991, 2, 192–222. [Google Scholar] [CrossRef]
  176. Wu, K.; Vassileva, J.; Zhao, Y. Understanding users’ intention to switch personal cloud storage services: Evidence from the Chinese market. Comput. Hum. Behav. 2017, 68, 300–314. [Google Scholar] [CrossRef]
  177. Kelley, D.L. Measurement Made Accessible: A Research Approach Using Qualitative, Quantitative and Quality Improvement Methods; Sage Publications: Los Angeles, CA, USA, 1999. [Google Scholar]
  178. McKenzie, J.F.; Wood, M.L.; Kotecki, J.E.; Clark, J.K.; Brey, R.A. Establishing content validity: Using qualitative and quantitative steps. Am. J. Health Behav. 1999, 23, 311–318. [Google Scholar] [CrossRef]
  179. Zikmund, W.G.; Babin, B.J.; Carr, J.C.; Griffin, M. Business Research Methods, 9th ed.; South-Western Cengage Learning: Nelson, BC, Canada, 2013. [Google Scholar]
  180. Sekaran, U.; Bougie, R. Research Methods for Business: A Skill Building Approach; John Wiley & Sons: New York, NY, USA, 2016. [Google Scholar]
  181. Mathieson, K.; Peacock, E.; Chin, W.W. Extending the technology acceptance model. ACM SIGMIS Database Database Adv. Inf. Syst. 2001, 32, 86. [Google Scholar] [CrossRef]
  182. Diamantopoulos, A.; Winklhofer, H.M. Index construction with formative indicators: An alternative to scale development. J. Mark. Res. 2001, 38, 269–277. [Google Scholar] [CrossRef]
  183. MacKenzie, S.B.; Podsakoff, P.M.; Podsakoff, N.P. Construct measurement and validation procedures in MIS and behavioral research: Integrating new and existing techniques. MIS Q. 2011, 35, 293–334. [Google Scholar] [CrossRef]
  184. Petter, S.; Straub, D.; Rai, A. Specifying formative constructs in information systems research. MIS Q. 2007, 31, 623. [Google Scholar] [CrossRef] [Green Version]
  185. Haladyna, T.M. Developing and Validating Multiple-Choice Test Items; Routledge: London, UK, 2004. [Google Scholar]
  186. DeVon, H.A.; Block, M.E.; Moyle-Wright, P.; Ernst, D.M.; Hayden, S.J.; Lazzara, D.J.; Savoy, S.M.; Kostas-Polston, E. A psychometric toolbox for testing validity and reliability. J. Nurs. Sch. 2007, 39, 155–164. [Google Scholar] [CrossRef] [PubMed]
  187. Webster, J.; Watson, R.T. Analyzing the past to prepare for the future: Writing a literature review. MIS Q. 2002, 26, 13–23. [Google Scholar]
188. MacKenzie, S.B.; Podsakoff, P.M.; Jarvis, C.B. The Problem of Measurement Model Misspecification in Behavioral and Organizational Research and Some Recommended Solutions. J. Appl. Psychol. 2005, 90, 710–730. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  189. Briggs, R.O.; Reinig, B.A.; Vreede, G.-J. The Yield Shift Theory of Satisfaction and Its Application to the IS/IT Domain. J. Assoc. Inf. Syst. 2008, 9, 267–293. [Google Scholar] [CrossRef]
190. Rushinek, A.; Rushinek, S.F. What makes users happy? Commun. ACM 1986, 29, 594–598.
191. Oliver, R.L. Measurement and evaluation of satisfaction processes in retail settings. J. Retail. 1981, 57, 24–48.
192. Swanson, E.B.; Dans, E. System life expectancy and the maintenance effort: Exploring their equilibration. MIS Q. 2000, 24, 277.
193. Gill, T.G. Early expert systems: Where are they now? MIS Q. 1995, 19, 51.
194. Keil, M.; Mann, J.; Rai, A. Why software projects escalate: An empirical analysis and test of four theoretical models. MIS Q. 2000, 24, 631.
195. Campion, M.A.; Medsker, G.J.; Higgs, A.C. Relations between work group characteristics and effectiveness: Implications for designing effective work groups. Pers. Psychol. 1993, 46, 823–847.
196. Baas, P. Task-Technology Fit in the Workplace: Affecting Employee Satisfaction and Productivity; Erasmus Universiteit: Rotterdam, The Netherlands, 2010.
197. Doolin, B.; Troshani, I. Organizational adoption of XBRL. Electron. Mark. 2007, 17, 199–209.
198. Segars, A.H.; Grover, V. Strategic information systems planning success: An investigation of the construct and its measurement. MIS Q. 1998, 22, 139.
199. Sharma, R.; Yetton, P.; Crawford, J. Estimating the effect of common method variance: The method–method pair technique with an illustration from TAM research. MIS Q. 2009, 33, 473–490.
200. Gorla, N.; Somers, T.M.; Wong, B. Organizational impact of system quality, information quality, and service quality. J. Strateg. Inf. Syst. 2010, 19, 207–228.
201. Malhotra, N.K. Questionnaire design and scale development. In The Handbook of Marketing Research: Uses, Misuses, and Future Advances; Sage: Thousand Oaks, CA, USA, 2006; pp. 83–94.
202. Hertzog, M.A. Considerations in determining sample size for pilot studies. Res. Nurs. Health 2008, 31, 180–191.
203. Saunders, M.N. Research Methods for Business Students, 5th ed.; Pearson Education India: Bengaluru, India, 2011.
204. Sekaran, U.; Bougie, R. Research Methods for Business: A Skill Building Approach; John Wiley & Sons: New York, NY, USA, 2003.
205. Tellis, W. Introduction to case study. Qual. Rep. 1997, 3, 2.
206. Whitehead, A.L.; Julious, S.A.; Cooper, C.L.; Campbell, M.J. Estimating the sample size for a pilot randomised trial to minimise the overall trial sample size for the external pilot and main trial for a continuous outcome variable. Stat. Methods Med. Res. 2016, 25, 1057–1073.
207. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2016.
208. Chin, W.W.; Marcolin, B.L.; Newsted, P.R. A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and an electronic-mail emotion/adoption study. Inf. Syst. Res. 2003, 14, 189–217.
209. Hulland, J. Use of partial least squares (PLS) in strategic management research: A review of four recent studies. Strateg. Manag. J. 1999, 20, 195–204.
210. Gefen, D.; Rigdon, E.E.; Straub, D. Editor’s comments: An update and extension to SEM guidelines for administrative and social science research. MIS Q. 2011, 35, iii–xiv.
211. Chin, W.W. How to write up and report PLS analyses. In Handbook of Partial Least Squares; Springer: Berlin/Heidelberg, Germany, 2010; pp. 655–690.
212. Ainuddin, R.A.; Beamish, P.W.; Hulland, J.S.; Rouse, M.J. Resource attributes and firm performance in international joint ventures. J. World Bus. 2007, 42, 47–60.
213. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The use of partial least squares path modeling in international marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Bingley, UK, 2009; pp. 277–319.
214. Urbach, N.; Ahlemann, F. Structural equation modeling in information systems research using partial least squares. J. Inf. Technol. Theory Appl. 2010, 11, 5–40.
215. Sommerville, I. Software Engineering, 9th ed.; Pearson Education Limited: Harlow, UK, 2011; p. 18. ISBN 0137035152.
216. Venkatesh, V.; Bala, H. Technology acceptance model 3 and a research agenda on interventions. Decis. Sci. 2008, 39, 273–315.
217. Venkatesh, V.; Davis, F.D. A theoretical extension of the technology acceptance model: Four longitudinal field studies. Manag. Sci. 2000, 46, 186–204.
218. Taylor, S.; Todd, P.A. Understanding information technology usage: A test of competing models. Inf. Syst. Res. 1995, 6, 144–176.
219. Faulkner, L. Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behav. Res. Methods Instrum. Comput. 2003, 35, 379–383.
220. Turner, C.W.; Lewis, J.R.; Nielsen, J. Determining usability test sample size. Int. Encycl. Ergon. Hum. Factors 2006, 3, 3084–3088.
221. Zaman, H.; Robinson, P.; Petrou, M.; Olivier, P.; Shih, T.; Velastin, S.; Nystrom, I. Visual Informatics: Sustaining Research and Innovations; LNCS; Springer: Berlin/Heidelberg, Germany, 2011.
222. Hadi, A.; Daud, W.M.F.W.; Ibrahim, N.H. The development of history educational game as a revision tool for Malaysia school education. In Proceedings of the International Visual Informatics Conference, Selangor, Malaysia, 9–11 November 2011; Springer: Berlin/Heidelberg, Germany, 2011.
223. Marian, A.M.; Haziemeh, F.A. On-line mobile staff directory service: Implementation for the Irbid University College (IUC). Ubiquitous Comput. Commun. J. 2011, 6, 25–33.
224. Brinkman, W.-P.; Haakma, R.; Bouwhuis, D. The theoretical foundation and validity of a component-based usability questionnaire. Behav. Inf. Technol. 2009, 28, 121–137.
225. Mikroyannidis, A.; Connolly, T. Case study 3: Exploring open educational resources for informal learning. In Responsive Open Learning Environments; Springer: Cham, Switzerland, 2015; pp. 135–158.
226. Shanmugam, M.; Yah Jusoh, Y.; Jabar, M.A. Measuring continuance participation in online communities. J. Theor. Appl. Inf. Technol. 2017, 95, 3513–3522.
227. Straub, D.; Boudreau, M.C.; Gefen, D. Validation guidelines for IS positivist research. Commun. Assoc. Inf. Syst. 2004, 13, 24.
228. Lynn, M.R. Determination and quantification of content validity. Nurs. Res. 1986, 35, 382–385.
229. Dobratz, M.C. The life closure scale: Additional psychometric testing of a tool to measure psychological adaptation in death and dying. Res. Nurs. Health 2004, 27, 52–62.
230. Davis, L.L. Instrument review: Getting the most from a panel of experts. Appl. Nurs. Res. 1992, 5, 194–197.
231. Polit, D.F.; Beck, C.T. The content validity index: Are you sure you know what’s being reported? Critique and recommendations. Res. Nurs. Health 2006, 29, 489–497.
232. Qasem, Y.A.M.; Asadi, S.; Abdullah, R.; Yah, Y.; Atan, R.; Al-Sharafi, M.A.; Yassin, A.A. A multi-analytical approach to predict the determinants of cloud computing adoption in higher education institutions. Appl. Sci. 2020, 10.
233. Coolican, H. Research Methods and Statistics in Psychology; Psychology Press: New York, NY, USA, 2017.
234. Briggs, S.R.; Cheek, J.M. The role of factor analysis in the development and evaluation of personality scales. J. Personal. 1986, 54, 106–148.
235. Tabachnick, B.G.; Fidell, L.S. Principal components and factor analysis. In Using Multivariate Statistics, 4th ed.; Allyn & Bacon: Boston, MA, USA, 2001; pp. 582–633.
236. Gliem, J.A.; Gliem, R.R. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. In Proceedings of the 2003 Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, Columbus, OH, USA, 8–10 October 2003.
237. George, D.; Mallery, P. SPSS for Windows Step by Step: A Simple Guide and Reference; Allyn & Bacon: Boston, MA, USA, 2003.
238. Nunnally, J.C. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994.
239. Bagozzi, R.P.; Yi, Y. On the evaluation of structural equation models. J. Acad. Mark. Sci. 1988, 16, 74–94.
240. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50.
241. Gefen, D.; Straub, D.; Boudreau, M.-C. Structural equation modeling and regression: Guidelines for research practice. Commun. Assoc. Inf. Syst. 2000, 4, 7.
242. Chin, W.W. The partial least squares approach to structural equation modeling. In Modern Methods for Business Research; Marcoulides, G.A., Ed.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 1998; pp. 295–336.
243. Diamantopoulos, A.; Siguaw, J.A. Formative versus reflective indicators in organizational measure development: A comparison and empirical illustration. Br. J. Manag. 2006, 17, 263–282.
244. Hair, J.F.; Ringle, C.M.; Sarstedt, M. Partial least squares structural equation modeling: Rigorous applications, better results and higher acceptance. Long Range Plan. 2013, 46, 1–12.
245. Cenfetelli, R.T.; Bassellier, G. Interpretation of formative measurement in information systems research. MIS Q. 2009, 33, 689–707.
246. Yoo, Y.; Henfridsson, O.; Lyytinen, K. Research commentary—The new organizing logic of digital innovation: An agenda for information systems research. Inf. Syst. Res. 2010, 21, 724–735.
247. Nylén, D.; Holmström, J. Digital innovation strategy: A framework for diagnosing and improving digital product and service innovation. Bus. Horiz. 2015, 58, 57–67.
248. Maksimovic, M. Green Internet of Things (G-IoT) at engineering education institution: The classroom of tomorrow. Green Internet Things 2017, 16, 270–273.
249. Fortino, G.; Rovella, A.; Russo, W.; Savaglio, C. Towards cyberphysical digital libraries: Integrating IoT smart objects into digital libraries. In Management of Cyber Physical Objects in the Future Internet of Things; Springer: Berlin/Heidelberg, Germany, 2016; pp. 135–156.
250. Picciano, A.G. The evolution of big data and learning analytics in American higher education. J. Asynchronous Learn. Netw. 2012, 16, 9–20.
251. Ifinedo, P. An empirical analysis of factors influencing Internet/e-business technologies adoption by SMEs in Canada. Int. J. Inf. Technol. Decis. Mak. 2011, 10, 731–766.
252. Talib, A.M.; Atan, R.; Abdullah, R.; Murad, M.A.A. Security framework of cloud data storage based on multi agent system architecture—A pilot study. In Proceedings of the 2012 International Conference on Information Retrieval and Knowledge Management, CAMP’12, Kuala Lumpur, Malaysia, 13–15 March 2012.
253. Adrian, C.; Abdullah, R.; Atan, R.; Jusoh, Y.Y. Factors influencing to the implementation success of big data analytics: A systematic literature review. In Proceedings of the International Conference on Research and Innovation in Information Systems, ICRIIS, Langkawi, Malaysia, 16–17 July 2017.
Figure 1. Life cycle of an IS [95].
Figure 2. Expectation confirmation model of IS continuance [101].
Figure 3. Research Model.
Figure 4. Prototype Development Processes [215].
Table 1. Literature on CC Continuance.

Columns of the original matrix: Level of Analysis (IND, ORG); Adoption Phase (PRE, POST); Theoretical Perspective (ISC, ISS, ISD, TOE, OTH); Type (EMP, THEO).
Studies reviewed: [14], [72], [75], [76] *, [71] **, [77], [45] *, [78], [79], [70], [80], [49], [14].
Column totals (SUM): IND = 4; ORG = 10; PRE = 2; POST = 13; ISC = 5; ISS = 6; ISD = 3; TOE = 2; OTH = 3; EMP = 12; THEO = 1.
This Research

Legend: IND = Individual; ORG = Organizational; PRE = Pre-Adoption; POST = Post-Adoption; ISC = Information System Continuance; ISS = IS Success Model; ISD = IS Discontinuance Model; TOE = Technology–Organization–Environment Framework; OTH = Others; EMP = Empirical; THEO = Theoretical/Conceptual. * Study examines adopters’ and non-adopters’ intention to increase the level of sourcing; thus, it is categorized as adoption. ** Study examines adopters’ intention at individual and organizational levels; thus, it is categorized as an individual study.
Table 2. Life Cycle of an IS with Different Theoretical Approaches.

| Life Cycle Phase | Adoption | Usage | Termination |
| User/organization transformation | Intent to adopt | Continuance usage intention | Discontinuance usage intention |
| End-user state | No user | User | Ex-user |
| Individual-level theories | TAM [98] and UTAUT [99] | ECT [100], which has taken shape in the ISC model [101] | – |
| Organizational-level theories | TOE framework [102], DOI [103], and Social Contagion [104] | ISS model [105] | ISD model [16] |

Legend: IS = Information System; TAM = Technology Acceptance Model; UTAUT = Unified Theory of Acceptance and Use of Technology; ECT = Expectation Confirmation Theory; ISC = Information System Continuance model; ISS = Information System Success model; ISD = Information System Discontinuance model; TOE = Technology–Organization–Environment Framework; DOI = Diffusion of Innovation theory; ITMAP = Information Technology Post Adoption Model.
Table 3. Mapping matrix of model constructs from ISC, ISS, ISD, and TOE.

Constructs/Independent Variables mapped in the original matrix: Satisfaction; Confirmation; Technology (Net Benefits, Technology Integration, System Quality, Information Quality, System Integration); Organization (Collaboration); Environment (Regulatory Policy, Competitive Pressures).

| Theory/Model | Technology/Dependent Variable | Source |
| ISD | Organizational-level information system discontinuance intentions | [16] |
| ISC | Information system continuance | [101] |
| ISS | Information system success | [105,120] |
| ECM & TOE | Enterprise 2.0 post-adoption | [72] |
| TAM | Continuance intention to use CC | [75] |
| ISC & OTH | Disruptive technology continuous adoption intentions | [76] |
| ISS | CC evaluation | [77] |
| ISS & ISD | Cloud-based enterprise systems | [78] |
| ISC | SaaS-based collaboration tools | [45] |
| ISC | CC client–provider relationship | [70] |
| ISC | Operational cloud enterprise system | [152] |
| OTH | Usage and adoption of CC by SMEs | [143] |
| TOE | Knowledge management systems diffusion | [153] |
| TCT | Information technology adoption behavior life cycle | [154] |
| ISC | Wearable continuance | [155] |

Legend: TOE = Technology–Organization–Environment Framework; ISC = Information System Continuance Model; ISS = IS Success Model; ISD = IS Discontinuance Model; TAM = Technology Acceptance Model; TCT = Technology Continuance Theory; ECM = Expectation Confirmation Model; OTH = Others.
Table 4. Constructs and definitions.

| Construct | Definition | Literature Sources | Previous Studies |
| Net Benefits (Formative) | Extent to which an information system benefits individuals, groups, or organizations. | [105,116,120] | [14,77,78,152] |
| System Quality (Formative) | Desirable features of a system (e.g., reliability, timeliness, or ease of use). | [105,116,120] | [14,77,78,152] |
| Information Quality (Formative) | Desirable features of a system’s output (e.g., format, relevance, or completeness). | [105,116,120] | [14,77,78,152] |
| Confirmation (Reflective) | Extent to which a user in an HEI perceives that the actual outcomes of use confirm (are consistent with or exceed) or disconfirm (fall short of) their expectations or desires. | [101,189,190] | [45,72] |
| Satisfaction (Reflective) | Psychological state that results when the emotion linked to disconfirmed expectations is paired with the user’s previous attitudes towards the consumption experience. | [101,191] | [45,72,76] |
| Technical Integration (Reflective) | Extent to which an information system depends on intricate connections with different technological elements. | [16,192] | [14,78] |
| System Investment (Reflective) | Resources, both financial and otherwise, that the institution has applied to acquire, implement, and use an information system. | [16,193,194] | [14,78] |
| Collaboration (Reflective) | Extent to which a CC application supports cooperation and collaboration among stakeholders. | [195,196] | [42,142,143,144] |
| Regulatory Policy (Reflective) | Extent to which government policy supports, pressures, or protects the continued use of CC applications. | [147,169,170] | [145,146] |
| Competitive Pressure (Reflective) | Pressure perceived by institutional leadership that industry rivals may have gained a significant competitive advantage by using CC applications. | [170,171,197] | [72,145] |
| Continuance Intention (Reflective) | Extent to which organizational decision makers are likely to continue using an information system. | [16,101] | [45,72,76] |
Table 5. Reliability Statistics.

| Construct | No. of Items | Min Inter-Item Correlation | Max Inter-Item Correlation | Cronbach’s Alpha |
| CC Continuance Use | 3 | 0.656 | 0.828 | 0.907 |
| Satisfaction | 4 | 0.684 | 0.81 | 0.916 |
| Confirmation | 5 | 0.124 | 0.795 | 0.78 |
| Net Benefit | 13 | – | – | 0.916 |
| Technical Integration | 3 | 0.711 | 0.788 | 0.891 |
| System Quality | 13 | – | – | 0.927 |
| Information Quality | 7 | – | – | 0.928 |
| System Investment | 3 | 0.67 | 0.731 | 0.836 |
| Collaboration | 5 | 0.504 | 0.742 | 0.899 |
| Regulatory Policy | 5 | 0.398 | 0.821 | 0.894 |
| Competitive Pressure | 4 | 0.577 | 0.772 | 0.86 |
| All Items | 65 | – | 0.862 | 0.913 |
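To make the reliability figures in Table 5 easier to reproduce, the sketch below computes Cronbach’s alpha and the minimum/maximum inter-item correlations for a single construct. This is a minimal Python illustration: the 30 simulated 5-point Likert responses for the three CC Continuance Use items are hypothetical stand-ins for the pilot data, which is not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def inter_item_range(items: np.ndarray) -> tuple[float, float]:
    """Min and max of the off-diagonal inter-item correlations."""
    r = np.corrcoef(items, rowvar=False)
    off_diagonal = r[~np.eye(r.shape[0], dtype=bool)]
    return float(off_diagonal.min()), float(off_diagonal.max())

# Hypothetical 5-point Likert responses from 30 pilot respondents for the
# three CC Continuance Use items (CCCU1-CCCU3); for illustration only.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))                        # shared attitude
cccu = np.clip(base + rng.integers(-1, 2, size=(30, 3)), 1, 5).astype(float)

print(f"Cronbach's alpha: {cronbach_alpha(cccu):.3f}")
print("Inter-item correlation (min, max):", inter_item_range(cccu))
```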
Table 6. Quantitative Assessment of Measurement Model (Reflective).

| Construct (Reflective) | Item | Loading 1 | AVE 2 | CR 3 |
| Cloud Computing Continuance Use | CCCU1 | 0.896 | 0.845 | 0.942 |
| | CCCU2 | 0.961 | | |
| | CCCU3 | 0.898 | | |
| Confirmation | CON1 | 0.505 | 0.544 | 0.852 |
| | CON2 | 0.622 | | |
| | CON3 | 0.876 | | |
| | CON4 | 0.786 | | |
| | CON5 | 0.832 | | |
| Satisfaction | SAT1 | 0.872 | 0.798 | 0.941 |
| | SAT2 | 0.887 | | |
| | SAT3 | 0.917 | | |
| | SAT4 | 0.897 | | |
| Technical Integration | TE1 | 0.923 | 0.824 | 0.934 |
| | TE2 | 0.876 | | |
| | TE3 | 0.924 | | |
| System Investment | SI1 | 0.910 | 0.799 | 0.923 |
| | SI2 | 0.871 | | |
| | SI3 | 0.901 | | |
| Collaboration | COL1 | 0.880 | 0.716 | 0.926 |
| | COL2 | 0.775 | | |
| | COL3 | 0.835 | | |
| | COL4 | 0.867 | | |
| | COL5 | 0.870 | | |
| Regulatory Policy | RP1 | 0.809 | 0.697 | 0.920 |
| | RP2 | 0.840 | | |
| | RP3 | 0.859 | | |
| | RP4 | 0.853 | | |
| | RP5 | 0.812 | | |
| Competitive Pressure | CP1 | 0.776 | 0.748 | 0.922 |
| | CP2 | 0.904 | | |
| | CP3 | 0.878 | | |
| | CP4 | 0.895 | | |

1 All item loadings > 0.5 indicate indicator reliability; 2 all average variance extracted (AVE) values > 0.5 indicate convergent validity; 3 all composite reliability (CR) values > 0.7 indicate internal consistency.
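The AVE and CR columns above follow the standard formulas: AVE is the mean of the squared standardized loadings, and CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. As a sanity check, the short Python snippet below recomputes both statistics from the three CC Continuance Use loadings reported in Table 6 and recovers the tabled values up to rounding.

```python
# Recompute AVE and composite reliability (CR) from the standardized
# outer loadings of CC Continuance Use reported in Table 6.
loadings = [0.896, 0.961, 0.898]

ave = sum(l ** 2 for l in loadings) / len(loadings)        # mean squared loading
ssq = sum(loadings) ** 2                                    # (sum of loadings)^2
cr = ssq / (ssq + sum(1 - l ** 2 for l in loadings))        # error variance in denominator

print(f"AVE = {ave:.3f}")  # 0.844, vs. 0.845 reported (rounding in the loadings)
print(f"CR  = {cr:.3f}")   # 0.942, matching Table 6
```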
Table 7. Discriminant Validity (Fornell–Larcker Criterion).

| Latent Construct | CCCU | COL | CP | Conf | IQ | NB | RP | SI | SQ | SAT | TE |
| CC Continuance Use | 0.919 |
| Collaboration | 0.85 | 0.846 |
| Competitive Pressure | −0.524 | −0.541 | 0.865 |
| Confirmation | −0.201 | −0.277 | 0.397 | 0.737 |
| Information Quality | −0.615 | −0.579 | 0.381 | 0.328 | formative |
| Net Benefits | 0.719 | 0.736 | −0.592 | −0.617 | −0.733 | formative |
| Regulatory Policy | 0.409 | 0.472 | −0.821 | −0.306 | −0.378 | 0.513 | 0.835 |
| System Investment | 0.903 | 0.807 | −0.458 | −0.251 | −0.578 | 0.695 | 0.383 | 0.894 |
| System Quality | 0.838 | 0.789 | −0.593 | −0.481 | −0.782 | 0.877 | 0.438 | 0.776 | formative |
| Satisfaction | 0.473 | 0.54 | −0.468 | −0.58 | −0.812 | 0.794 | 0.462 | 0.481 | 0.805 | 0.894 |
| Technical Integration | 0.894 | 0.739 | −0.479 | −0.256 | −0.696 | 0.647 | 0.396 | 0.809 | 0.794 | 0.504 | 0.908 |
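Under the Fornell–Larcker criterion, each diagonal entry in Table 7 is the square root of the construct’s AVE and should exceed that construct’s correlations with all other constructs; the criterion is not applied to the formative constructs, hence their “formative” diagonal cells. A quick Python check for the CC Continuance Use column, using the AVE from Table 6 and the correlations from Table 7, might look as follows.

```python
import math

# sqrt(AVE) for CC Continuance Use (AVE = 0.845 from Table 6)
diag = math.sqrt(0.845)
print(f"sqrt(AVE) = {diag:.3f}")  # ~0.919, the diagonal entry in Table 7

# Correlations of CC Continuance Use with the other reflective constructs
# (first column of Table 7)
correlations = {"COL": 0.85, "CP": -0.524, "Conf": -0.201, "RP": 0.409,
                "SI": 0.903, "SAT": 0.473, "TE": 0.894}
for construct, r in correlations.items():
    status = "ok" if abs(r) < diag else "violation"
    print(f"{construct}: |r| = {abs(r):.3f} -> {status}")
```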
Table 8. Quantitative Assessment of Measurement Model (Formative): Redundancy Analysis, Assessing Multicollinearity, Significance, and Contribution.

Net Benefits (formative):
| Item | VIF | t-Value | Weight | Loading |
| NB1 | 3.615 | 0.604 | 0.125 | 0.633 |
| NB2 | 3.276 | 1.32 | 0.313 | 0.802 |
| NB3 | 6.695 | 1.653 | −0.489 | 0.648 |
| NB4 | 3.098 | 1.561 | 0.346 | 0.71 |
| NB5 | 2.202 | 1.618 | 0.23 | 0.535 |
| NB6 | 1.942 | 0.801 | 0.134 | 0.691 |
| NB7 | 5.617 | 1.785 | 0.608 | 0.876 |
| NB8 | 3.896 | 0.632 | 0.146 | 0.609 |
| NB9 | 3.236 | 0.988 | 0.214 | 0.641 |
| NB10 | 1.51 | 0.932 | 0.158 | 0.45 |
| NB11 | 6.653 | 1.149 | −0.391 | 0.721 |
| NB12 | 2.053 | 0.231 | −0.039 | 0.594 |
Redundancy analysis (F2) with the global reflective item NB13: 0.763.

System Quality (formative):
| Item | VIF | t-Value | Weight | Loading |
| SQ1 | 4.197 | 0.115 | −0.019 | 0.75 |
| SQ2 | 3.715 | 0.854 | 0.141 | 0.626 |
| SQ3 | 2.57 | 1.397 | 0.165 | 0.794 |
| SQ4 | 2.182 | 1.222 | 0.134 | 0.72 |
| SQ5 | 5.351 | 1.203 | 0.204 | 0.867 |
| SQ6 | 5.582 | 0.199 | −0.035 | 0.726 |
| SQ7 | 2.615 | 2.399 | 0.262 | 0.771 |
| SQ8 | 1.3 | 0.711 | −0.065 | 0.184 |
| SQ9 | 4.435 | 1.392 | 0.239 | 0.768 |
| SQ10 | 1.92 | 0.046 | 0.005 | 0.707 |
| SQ11 | 3.749 | 1.26 | 0.156 | 0.659 |
| SQ12 | 2.434 | 0.692 | 0.075 | 0.82 |
Redundancy analysis (F2) with the global reflective item SQ13: 0.784.

Information Quality (formative):
| Item | VIF | t-Value | Weight | Loading |
| IQ1 | 3.122 | 0.366 | −0.079 | 0.664 |
| IQ2 | 3.232 | 0.995 | 0.228 | 0.874 |
| IQ3 | 2.84 | 1.569 | 0.348 | 0.839 |
| IQ4 | 3.78 | 1.787 | 0.436 | 0.874 |
| IQ5 | 4.753 | 0.838 | 0.219 | 0.831 |
| IQ6 | 2.928 | 0.017 | −0.004 | 0.727 |
Redundancy analysis (F2) with the global reflective item IQ7: 0.884.
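The VIF column in Table 8 screens the formative indicators for multicollinearity: VIF_j = 1/(1 − R²_j), where R²_j is obtained by regressing indicator j on the construct’s remaining indicators, and values below 5 (or, more leniently, 10) are commonly treated as acceptable. The sketch below computes VIFs in this way; the simulated responses for the six Information Quality indicators are hypothetical and only illustrate the calculation, not the values reported above.

```python
import numpy as np

def vif(indicators: np.ndarray) -> np.ndarray:
    """VIF_j = 1 / (1 - R^2_j), regressing indicator j on the remaining ones."""
    n, k = indicators.shape
    out = np.empty(k)
    for j in range(k):
        y = indicators[:, j]
        X = np.column_stack([np.ones(n), np.delete(indicators, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit with intercept
        r2 = 1.0 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out

# Hypothetical responses for six indicators (stand-ins for IQ1-IQ6): one
# latent driver plus indicator-specific noise induces moderate collinearity.
rng = np.random.default_rng(42)
latent = rng.normal(size=(200, 1))
iq = latent + 0.6 * rng.normal(size=(200, 6))
print(np.round(vif(iq), 3))   # illustrative VIFs, not the values in Table 8
```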
