Article

Factors Affecting the Use of Smart Mobile Examination Platforms by Universities’ Postgraduate Students during the COVID-19 Pandemic: An Empirical Study

by Muhammad Turki Alshurideh 1,2, Barween Al Kurdi 3, Ahmad Qasim AlHamad 4, Said A. Salloum 5,6,*, Shireen Alkurdi 7, Ahlam Dehghan 1, Mohammad Abuhashesh 8 and Ra’ed Masa’deh 9
1 Department of Management, University of Sharjah, Sharjah 27272, United Arab Emirates
2 Department of Marketing, School of Business, University of Jordan, Amman 11942, Jordan
3 Department of Business Administration, Faculty of Economics and Administrative Sciences, The Hashemite University, Zarqa 13115, Jordan
4 Information Systems Department, University of Sharjah, Sharjah 27272, United Arab Emirates
5 Machine Learning and NLP Research Group, University of Sharjah, Sharjah 27272, United Arab Emirates
6 School of Science, Engineering, and Environment, University of Salford, Manchester M5 4WT, UK
7 Humanities Department, Al-Balqa Applied University, Amman 11134, Jordan
8 E-Marketing and Social Media Department, Princess Sumaya University for Technology (PSUT), Amman 11941, Jordan
9 Management Information Systems Department, School of Business, University of Jordan, Amman 11942, Jordan
* Author to whom correspondence should be addressed.
Submission received: 29 March 2021 / Revised: 26 April 2021 / Accepted: 29 April 2021 / Published: 30 April 2021

Abstract
Recent years have seen an increasingly widespread use of online learning technologies. This has prompted universities to make huge investments in technology to strengthen their position in the face of extensive competition and to enhance their students’ learning experience and efficiency. Numerous studies have been carried out regarding the use of online and mobile phone learning platforms. However, very few studies focus on how university students will accept and adopt smartphones as a new platform for taking examinations. Many reasons, most recently and most importantly the COVID-19 pandemic, have prompted educational institutions to move toward using both online and mobile learning techniques. This study is a pioneer in examining the intention to use mobile exam platforms and the prerequisites of such intention. The purpose of this study is to expand the Technology Acceptance Model (TAM) by including four additional constructs: content quality, service quality, information quality, and system quality. A self-administered survey was prepared and carried out to obtain the necessary primary data. In total, 566 students from universities in the United Arab Emirates took part in this survey. SmartPLS was used to test the study constructs and the structural model. Results showed that all study hypotheses are supported, confirming the effect of the TAM extension factors within the UAE higher education setting. These outcomes suggest that policymakers and education developers should consider mobile exam platforms as a new assessment platform and a possible technological solution, especially when considering the distance learning concept. It should be borne in mind that this is an initial study designed to explore the use of smartphones as a new platform for student examinations. Furthermore, mixed-method research is needed to check the effectiveness and suitability of such examination platforms, especially at postgraduate levels of higher education.

1. Introduction

In modern times, mobile phones are important in all aspects of life. One paper [1] reported that, globally, there are 3.39 billion internet users and 5.11 billion mobile users. The use of smartphone technologies has been researched in different educational contexts, such as preparation for examinations [2] and enhancing students’ vocabulary development [3]. Other scholars have investigated the use of mobile technologies and applications in student learning [4,5,6], in mobile blended learning [7], in enhancing learner participation and transforming pedagogy [8], and in conducting student voting and enhancing engagement and participation [9]. Some scholars have looked at smartphone applications in the medical field [10,11] and in engineering and technical education [12]. There is little investigation, however, into the use of the mobile phone as an accepted examination platform. Therefore, this study aims to shed some light on this subject. While studying mobile examination platforms is essential for all education stakeholders, it is particularly important to students and faculty members. Mobile examination platforms provide candidates with the means to take their exams on their phones at a time that is most suitable for them [2]. Conveniently, examiners can start the exam using mobile technology from any location [13]. These platforms are gaining in popularity because it is very easy to access them at any time [14]. In addition to examinations, mobile applications can be used for multiple other purposes, such as paper assessment, knowledge sharing, voting, and student registration [15,16].

1.1. Actual Use of Mobile Examination Platforms

As mentioned by one paper [17], information technology and innovation have become an undeniable and important part of the educational process [18]. Al-Hakeem et al. [19] stated that the online examination system is suitable for distance learning since it appropriately accommodates the virtual presence of lecturers and students. Within the same theme, students can use mobile apps to take an exam from distant locations [20]. Additionally, mobile examination platforms are used to monitor the academic progress of students [21]. As opined by Sung et al. [22], mobile exam platforms assist teachers in evaluating the theoretical and practical knowledge of students without concerning themselves with time and venue. Although desktop and tablet computers offer higher-bandwidth displays and far better interactivity than smartphones, Lim [23] supported this concept because it eases the process of web-based learning by removing the need for desktops and tablets. The integration of mobile examination platforms has positively affected the academic performance of students. As stated by Nikou et al. [24], portability, wireless communication, and sensitivity give the platforms an advantage over the traditional classroom examination system. For instance, the mobile examination platform “Kahoot” is used for educational purposes, serving to conduct live quizzes in class to assess student learning. The platform helps to prepare questions and distribute them among the students to assess the growth of their learning skills. Kahoot provides a range of question approaches, such as polls, quizzes, puzzles, and slides. It also makes the evaluation process easier for both teachers and students. Teachers support the usage of the mobile examination platform because it automatically calculates students’ grades and publishes the results immediately after the exam [25]. According to Nikou et al. [24], students can also ascertain their practice level and take the necessary steps to enhance their academic performance. Lalitha et al. [26] observed that mobile exam platforms offer user acceptance services and reduce the chances of copy-pasting and cheating to a large extent. Kaiiali et al. [27] added that mobile examination platforms control user privacy and prevent the opening of any other window until the exam is completed.

1.2. The Importance of Mobile Examination Platforms

Shyshkanova et al. [28] listed some of the advantageous features of mobile examination platforms, saying that they save both time and money, as well as offering security, confidentiality, and accessibility. Katz [29] pointed out that, with the help of the mobile examination platform, instructors are relieved of the task of creating exam papers and having to arrange an examination venue and timeslots. Han et al. [30] highlighted the security and confidentiality features of the platforms. These are critical because they help retain the integrity of the exam and assist in evaluating the actual academic performance of students [31]. Kadam et al. [32] pointed out that any leakage from the online platform would compromise standards. However, the mobile exam platform assures the maintenance of security and confidentiality [33]. Furthermore, the existing literature shows that the mobile examination platform also offers statistical analysis of students based on their performance [34]. As stated by Chang et al. [35], the platform provides a student performance graph at the end of the exam so that both students and teachers can use it for evaluation purposes and feedback. Conducting online exams via mobile is cheaper since there are no printing and paper costs [36]. Administrators wishing to decrease expenses are likely to favor the transition from paper copy exams to the use of mobile exam platforms [24]. Another benefit of mobile examination platforms is that they help to save time [37]. The lengthy processes and formalities involved in formulating question papers, registering students for exams, declaring results, and evaluating answer sheets are dispensed with entirely on mobile examination platforms [38].
This paper is organized as follows. We begin by introducing the literature that frames our conceptualization, followed by the development of research hypotheses. Then, we describe our research methodology and empirical results. We conclude the article by discussing the implications of the research findings for both theory and practice.

2. Background

2.1. System Characteristics

2.1.1. Quality of System

Although researchers have failed to offer a uniform definition of system quality, many of them consider it to refer to system accessibility, response time, and information quality. In this context, Alshurideh et al. [39] stated that perceived usefulness, customer acceptance, and ease of use are major quality criteria for any Internet system. Aghazamani [40] found that the features of a website or Internet system are the primary aspects affecting its level of acceptance. In this case, TAM’s main elements were found to significantly mediate the behavior of Internet system users [41]. Additionally, system quality factors can be seen as the most essential elements affecting internet-based services, such as mobile cloud services, mobile exams, mobile commerce, and mobile learning [42]. Alshurideh et al. [39] opined that system quality significantly affects information quality and thereby customer satisfaction in the long run. Sife et al. [43] also found that service quality is influenced directly by the information available on the Internet and is measured mostly by the quality of that information. Based on the above explanations, the effect of system quality on both perceived usefulness and perceived ease of use can be hypothesized as follows:
Hypothesis 1a (H1a).
System Quality (SYS) of mobile examination platforms has a significant positive effect on their perceived usefulness (PU).
Hypothesis 1b (H1b).
System Quality (SYS) of mobile examination platforms has a significant positive effect on their perceived ease of use (PEOU).

2.1.2. Information Quality

In all forms of business, the improvement of service quality remains a primary necessity, as it fosters both revenues and growth rates [44]. In another study, Salloum et al. [45] found that information quality is the salient factor that helps in predicting customer behavior and decision-making. Evans et al. [46] opined that information quality, perceived usefulness, and attitudes are major indicators that help in predicting the purchase behavior of customers. Based on this, Salloum et al. [47] stated that the enhancement of service quality goes hand in hand with that of information quality. These days, most organizations use the Internet to reach a wide range of customers and to increase their engagement through low-cost advertising [48]. However, the quality of information shared via the Internet remains a major concern and dilemma [49,50]. Furthermore, Al-Qaysi et al. [51] found that in such situations the individual’s acceptance is strongly influenced by information quality and response time. Accordingly, information quality is also found to be a major intrinsic motivation for using computers and the Internet in the workplace and has remained the preliminary driver of several mobile services today [52]. Based on the previous explanations, the following hypotheses can be drawn:
Hypothesis 2a (H2a).
Information quality (INF) of mobile examination platforms has a significant positive effect on their perceived usefulness (PU).
Hypothesis 2b (H2b).
Information quality (INF) of mobile examination platforms has a significant positive effect on their perceived ease of use (PEOU).

2.1.3. Content Quality

According to Bates et al. [53], improving the quality of the learning environment is imperative for enhancing e-learning efficiency. Chang et al. [54] stated that the learning environment primarily includes learning content, interaction, and the learning management systems offered by different e-learning systems. Content quality is, therefore, another major aspect affecting the ease of use and perceived usefulness of different mobile and Internet applications [55]. Additionally, Chen [56] found that content quality also impacts information quality and affects the behavioral intentions of customers; according to Chou et al. [57], it primarily consists of three dimensions: information content, perceived ease of use, and perceived usefulness. In the case of e-learning, course, information, and content quality largely promote the perceived ease of use and perceived usefulness of mobile use [58]. Pituch et al. [59] specified in particular that improving content quality is important for increasing perceived web quality and interactivity. Based on the above explanations, the effect of content quality can be hypothesized as follows:
Hypothesis 3a (H3a).
Content quality (CONT) of mobile examination platforms has a significant positive effect on their perceived usefulness (PU).
Hypothesis 3b (H3b).
Content quality (CONT) of mobile examination platforms has a significant positive effect on their perceived ease of use (PEOU).

2.1.4. Service Quality

According to the Technology Acceptance Model (TAM), both perceived usefulness and perceived ease of use are primary factors required for the effective use of a system, alongside the key quality factors associated with customer-centric services [60]. Gachago et al. [61] affirmed that improving service quality remains the primary aim and objective for all businesses, as it has major implications for overall productivity and profitability. It must be noted that enhancing service quality requires attention to a set of major dimensions [62]. Some of these are accessibility, usefulness of the content, interaction, adequacy of information, and usability [63]. These factors also play a critical role in the case of e-commerce, suggesting that improving service quality is a primary necessity for enhancing e-commerce [64,65,66]. According to Davis [67], system and information quality are regarded as major determinants of the perceived ease of use and perceived usefulness of any data or information. Based on the above explanations, the effect of service quality on both perceived usefulness and perceived ease of use can be hypothesized as follows:
Hypothesis 4a (H4a).
Service quality (SERV) of mobile examination platforms has a significant positive effect on their perceived usefulness (PU).
Hypothesis 4b (H4b).
Service quality (SERV) of mobile examination platforms has a significant positive effect on their perceived ease of use (PEOU).

2.2. The Technology Acceptance Model and User Beliefs

2.2.1. Perceived Ease of Use (PEOU)

As stated by Prestridge [68], perceived ease of use (PEOU) may be understood as the specific degree to which people believe that using a certain system will be free of effort. This measure largely facilitates new technology adoption and affects the behavioral intention to use different social networks [69]. PEOU also tends to affect perceived usefulness [70]. Here, Keller et al. [71] identified that, in the case of mobile learning and online course delivery systems, mentors influence students’ PEOU [72]. Additionally, Palmer [73] found that social influence affects users’ PEOU, suggesting that the two share an intricate relationship. Based on the above explanation, the effect of PEOU on both perceived usefulness and intention to use mobile examination platforms can be expressed through the following hypotheses:
Hypothesis 5 (H5).
Perceived ease of use (PEOU) has a significant positive effect on the perceived usefulness (PU).
Hypothesis 6 (H6).
Perceived ease of use (PEOU) has a significant positive effect on the intention to use mobile exam platforms (INT).

2.2.2. Perceived Usefulness (PU)

According to Lee et al. [74], perceived usefulness (PU) may be considered the specific degree to which individuals believe that adopting a certain system will enhance overall job performance. PU also affects the behavioral intention of individuals to use particular social networks and is related to PEOU [75]. In the case of e-learning and mobile learning, PU is primarily affected by the instructor and the mentor, as well as by social influence [76]. In this context, Lin [77] also argued that the level of satisfaction and PU influence users’ continuance intention. Here, the overall satisfaction level depends on consumers’ confirmation of expectations [78]. Based on the above explanation, the effect of perceived usefulness on the intention to use mobile examination platforms can be drawn through the hypothesis detailed below.
It is well known that mobile devices are being increasingly used as platforms for different interactive services [79]. Mobile exams and mobile learning management systems are two major services in this context, which help students in their academic endeavors [80]. In this context, Joo et al. [81] found that exams administered via a smartphone are less expensive than conventional exams and have less scope for error, factors which promote students’ preference for mobile exams over manual testing. Moreover, mobile exams are more data-driven, quick, and efficient [82]. Han et al. [30] also found that mobile exams offer more security, provide quick results, and are compatible with different subjects and streams. Moreover, automated tests are reusable and therefore allow students to strengthen their foundations by taking multiple tests [83]. The intention to use mobile exams is therefore prevalent in almost all countries today, owing to rising usage of the Internet and other online technologies in addition to mobile technologies [84]. Based on the above discussion, the researchers hypothesize:
Hypothesis 7 (H7).
Perceived usefulness (PU) has a significant positive effect on the intention to use mobile exam platforms (INT).
Based on the main study factors explained above and the logical relations among them, the study model is illustrated in Figure 1.

3. Materials and Methods

This section gives details regarding the data collection, the study instruments used, the survey structure, the pilot testing of the study constructs, and the study sample and its demographic data.

3.1. Data Collection

During the fall semester, between 15 September and 20 October 2020, the research team randomly distributed a total of 600 hardcopy questionnaires among university students in the United Arab Emirates. Of these, 34 completed questionnaires were rejected because of missing values, leaving 566 properly completed and usable questionnaires, a response rate of approximately 94%. According to Krejcie et al. [85], this is an appropriate sample size, and it is also acceptable for assessment with structural equation modeling [86], which was subsequently employed for confirming the hypotheses. It should be noted that existing theories, together with the mobile-learning (M-learning) context, served as the foundation for the hypotheses. To assess the measurement model, the researchers used structural equation modeling (SEM) with SmartPLS Version 3.2.7 to examine the causal hypotheses, following the recommendation of [87]. The final path model was then used for the analysis.

3.2. Study Instrument

This research developed a survey instrument to validate the hypotheses. To measure the seven constructs in the questionnaire, the survey incorporated 26 items. The sources of these constructs are shown in Table 1. To ensure the applicability of the study, the researchers adapted questions from earlier studies.

3.3. Survey Structure

The students were provided with and asked to complete a questionnaire survey. The survey was divided into three sections:
1.
The first section concerned the personal data of the participants.
2.
The second section focused on the five items covering general questions regarding mobile-learning systems.
3.
The third section contained the 15 items measuring intention to use mobile examination platforms, content quality, information quality, perceived ease of use, perceived usefulness, system quality, and service quality.
The 26 items were measured through a five-point Likert Scale with the following values: (1) Strongly disagree, (2) Disagree, (3) Neutral, (4) Agree, and (5) Strongly agree.

3.4. A Pilot Study of the Study Constructs

A pilot study was conducted to establish the reliability of the questionnaire items. For the pilot study, 60 students, approximately 10% of the total research sample of 600, were selected at random from the population, closely following the commonly applied criterion. To assess the outcomes of the pilot study, Cronbach’s alpha was computed for internal reliability using IBM SPSS Statistics Version 23 (IBM, Armonk, NY, USA), and the appropriate conclusions for the measurement items were drawn. Following the recommended guidelines for social science research [98], a reliability coefficient of 0.7 is deemed acceptable. Table 2 shows the Cronbach’s alpha values for the seven measurement scales.
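For clarity, the internal-consistency statistic used above can be stated explicitly. This is the standard definition of Cronbach’s alpha, given here for reference rather than reproduced from the paper: for a scale of $k$ items,

$$\alpha \;=\; \frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total scale score; values of 0.7 or above are conventionally treated as acceptable.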

3.5. The Study Sample

The research team circulated hard copies of the questionnaire survey to students at a number of different universities in the United Arab Emirates (UAE) (N = 600).

3.6. The Study Sample’s Demographic Data

Table 3 encapsulates the study participants’ personal/demographic data. The ratio of male to female students was 52% to 48%. A total of 57% of the respondents fell into the age category of 18 to 29 years, while 43% of the respondents were above 29 years old. Regarding the students’ academic majors, 43% studied Business Administration, 23% were enrolled in the College of Engineering and Information Technology, and 19% were enrolled in the College of Mass Communication and Public Relations, while 9% were students of General Education and 6% of Humanities and Social Sciences. All of the respondents were from an educated background and were in pursuit of a university degree. A total of 70% of the respondents held a Bachelor’s degree, while 19% possessed a Master’s degree. Furthermore, 11% of the respondents were holders of a doctoral degree, while the remainder were diploma holders. According to Al-Emran et al. [99], the “purposive sampling approach” is appropriate when access to the respondents is easy and they are willing to volunteer. The study sample was made up of students from different colleges, of different ages, and studying at different levels. In addition, the demographic data were analyzed with the help of IBM SPSS Statistics Version 23. The comprehensive demographic data of the respondents are shown in Table 3.

4. Results and Discussion

4.1. Data Analysis

To conduct the data analysis, partial least squares structural equation modeling (PLS-SEM) was performed using SmartPLS V.3.2.7 software [100]. A two-step assessment approach was used to analyze the collected data, consisting of a measurement model and a structural model [66]. For this research, PLS-SEM is considered the most suitable technique [101]. PLS-SEM [87] is well suited to exploratory studies involving complex models and analyzes the whole model in one go [102]. It provides concurrent analysis of both the measurement and structural models, which also yields accurate estimates [103].

4.1.1. Convergent Validity

As stated by Hair et al. [66], validity (convergent and discriminant) and construct reliability (including composite reliability (CR), Dijkstra–Henseler’s rho (ρA), and Cronbach’s alpha (CA)) are taken into account in evaluating the measurement model. Table 4 shows that the Cronbach’s alpha (CA) values range between 0.718 and 0.897, surpassing the threshold of 0.7 [104] and thus establishing construct reliability. Table 4 also shows that the composite reliability (CR) values range from 0.755 to 0.903, again exceeding the threshold of 0.7 [105]. In addition, the Dijkstra–Henseler rho (ρA) reliability coefficient [106] was used to assess construct reliability. In exploratory research, ρA values, like CA and CR, should exceed 0.7, while values above 0.8 or 0.9 are expected in more advanced stages of research [104,107,108]. According to Table 4, the ρA of each measurement construct exceeds 0.70; construct reliability is therefore confirmed by these findings.
According to Hair et al. [66], measuring convergent validity requires assessing the average variance extracted (AVE) and the factor loadings. Table 4 shows that all factor loadings exceed the value of 0.7, while the AVE values range from 0.598 to 0.741, above the threshold of 0.5. Convergent validity was therefore attained as expected.
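For reference, the two convergent-validity statistics used above are conventionally defined as follows (standard formulas, stated here for clarity and assuming standardized indicators):

$$\mathrm{CR}=\frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)},\qquad \mathrm{AVE}=\frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2},$$

where $\lambda_i$ is the standardized loading of item $i$ on its construct and $k$ is the number of items; CR above 0.7 and AVE above 0.5 are the thresholds applied in this study.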

4.1.2. Discriminant Validity

To measure discriminant validity [66], two criteria were suggested: the Fornell–Larcker criterion and the heterotrait–monotrait ratio (HTMT). As shown in Table 5 [109], the Fornell–Larcker criterion is satisfied, as the square root of each construct’s AVE is greater than its correlations with the other constructs.
Table 6 shows the HTMT ratio results, illustrating that the value for every construct pair falls below the 0.85 threshold [69], thereby satisfying the HTMT criterion. These findings confirm discriminant validity. Overall, the assessment shows that no validity or reliability problems were encountered during the evaluation of the measurement model. Thus, the structural model can now be assessed using the collected data.
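The two discriminant-validity criteria can be summarized as follows (standard formulations, not reproduced from the paper). The Fornell–Larcker criterion requires, for every construct $i$,

$$\sqrt{\mathrm{AVE}_i} \;>\; \max_{j\neq i}\,|r_{ij}|,$$

where $r_{ij}$ is the correlation between constructs $i$ and $j$. The HTMT ratio for a pair of constructs is the average of the heterotrait–heteromethod correlations (correlations between indicators of different constructs) divided by the geometric mean of the average monotrait–heteromethod correlations (correlations among indicators of the same construct); values below 0.85 indicate discriminant validity.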

4.2. Model Fit

The standardized root mean square residual (SRMR), the exact fit criteria d_ULS and d_G, Chi-square, NFI, and RMS_theta are the fit measures provided by SmartPLS to demonstrate model fit in PLS-SEM [110]. The SRMR captures the difference between the observed correlations and the correlations implied by the model’s correlation matrix [87]; values below 0.08 are considered to indicate good model fit [111], while NFI values higher than 0.90 indicate acceptable fit [112]. The NFI is the ratio of the Chi-square value of the proposed model to that of the null/benchmark model [113]. The NFI is not ideal as a model fit measure, since the more parameters the model has, the higher the NFI becomes [87]. Two distance metrics, the squared Euclidean distance (d_ULS) and the geodesic distance (d_G), capture the discrepancy between the empirical covariance matrix and the covariance matrix implied by the composite factor model [87,106]. RMS_theta applies only to reflective models and assesses the degree of correlation among the outer model residuals [113]. Values lower than 0.12 indicate a good fit, and the closer the RMS_theta value is to zero, the better the PLS-SEM model; otherwise, the model does not show a good fit [114]. Hair et al. [87] recommended that the estimated model consider the total effects and model structure, whereas the saturated model assesses the correlations between all constructs.
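As a point of reference, the SRMR criterion quoted above follows the conventional definition

$$\mathrm{SRMR}=\sqrt{\frac{2}{p(p+1)}\sum_{i=1}^{p}\sum_{j\le i}\left(r_{ij}-\hat{r}_{ij}\right)^{2}},$$

where $p$ is the number of observed indicators, $r_{ij}$ are the empirical correlations, and $\hat{r}_{ij}$ are the model-implied correlations; values below 0.08 are read as a good fit.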
According to Table 7, the RMS_theta value was approximately 0.082, indicating that the goodness-of-fit of the PLS-SEM model is sufficient to establish global model validity.

4.3. Hypotheses Testing—Path Coefficient

Once the measurement model is confirmed, the next step is the structural model [115,116,117,118,119,120]. Through a bootstrapping procedure with 5000 re-samples, this involves determining the path coefficients and the coefficient of determination (R2) [66]. The structural equation model had high predictive power, as shown in Figure 2 and Table 8 [121]: the model explains almost 71%, 72%, and 73% of the variance in perceived usefulness, perceived ease of use, and intention to use mobile examination platforms, respectively. The model was estimated with SmartPLS using maximum likelihood estimation in order to examine the interdependence of the theoretical constructs of the structural model [87,122,123]. Concerning the path analysis, the path coefficients, t-values, and p-values for each hypothesis are shown in Table 9, and all hypotheses were supported. Based on the data analysis, hypotheses H1a, H1b, H2a, H2b, H3a, H3b, H4a, H4b, H5, H6, and H7 were supported by the empirical data. The quality of the system (SYS), information quality (INF), content quality (CONT), and service quality (SERV) have significant effects on perceived ease of use (PEOU): β = 0.436, p < 0.001; β = 0.769, p < 0.001; β = 0.158, p < 0.05; and β = 0.318, p < 0.05, respectively; hence, H1a, H2a, H3a, and H4a are supported. The quality of the system (SYS), information quality (INF), content quality (CONT), and service quality (SERV) also have significant effects on perceived usefulness (PU): β = 0.287, p < 0.001; β = 0.335, p < 0.001; β = 0.789, p < 0.001; and β = 0.531, p < 0.001, respectively; hence, H1b, H2b, H3b, and H4b are supported. Finally, the results also showed that perceived ease of use (PEOU) significantly influenced perceived usefulness (PU) (β = 0.262, p < 0.001) and the intention to use mobile exam platforms (INT) (β = 0.487, p < 0.001), supporting hypotheses H5 and H6, respectively. Perceived usefulness (PU) was determined to be significant in affecting the intention to use mobile examination platforms (INT) (β = 0.366, p < 0.001), supporting hypothesis H7.
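To make the bootstrapping step concrete, the following is a minimal sketch of how path coefficients and their t-values can be bootstrapped. It applies ordinary least squares to simulated composite scores and is purely illustrative; the variable names, simulated values, and the OLS simplification are assumptions on our part, not the SmartPLS PLS-SEM procedure used in the study.

```python
# Illustrative sketch only: a plain NumPy bootstrap of two path coefficients
# (PEOU -> INT and PU -> INT) on hypothetical composite scores. This is an
# OLS-on-composites simplification, not the authors' SmartPLS procedure.
import numpy as np

rng = np.random.default_rng(0)
n = 566                                    # sample size reported in the study

# Hypothetical composite scores standing in for the survey-based constructs.
peou = rng.normal(size=n)
pu = 0.26 * peou + rng.normal(scale=0.9, size=n)
int_use = 0.49 * peou + 0.37 * pu + rng.normal(scale=0.7, size=n)

X = np.column_stack([np.ones(n), peou, pu])    # predictors of intention (INT)

def path_coefficients(X, y):
    """Least-squares path coefficients of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# 5000 bootstrap re-samples, mirroring the re-sample count used in the paper.
boot = np.array([path_coefficients(X[idx], int_use[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(5000))])

estimate = path_coefficients(X, int_use)
std_error = boot.std(axis=0, ddof=1)
t_values = estimate / std_error                # t-statistics, as in Table 9
for name, t in zip(["intercept", "PEOU -> INT", "PU -> INT"], t_values):
    print(f"{name}: t = {t:.2f}")
```

In practice, the bootstrap standard errors and t-values for every path are read directly from the SmartPLS output rather than computed by hand as sketched here.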

5. Conclusions

The data gathered clearly indicate that the majority of the study sample considered mobile learning platforms to be a convenient tool of assessment. Among the study sample, participants in the 18–29 age group particularly expressed interest in using mobile examination platforms, which could help in adopting new assessment techniques and, in turn, make the assessment process easier.
The results show that the main parameter promoting students’ use of mobile examination platforms is system quality. If users find the quality of the system to be high, their willingness and intention to use such new examination approaches properly are boosted. This confirms the view of Akar et al. [42], who saw system quality as the most essential element affecting both Internet- and mobile-based services, such as mobile cloud services, mobile learning and exam services, and even mobile commerce services. Moreover, this study found that information quality plays an essential role in both the perceived ease of use and the perceived usefulness of mobile examination platforms. Many scholars have confirmed these results. Davis [67], for example, declared that information quality is regarded as one of the major determinants of the perceived ease of use and perceived usefulness of any data or information used. Moreover, the collected data and results show that the quality of the content of both mobile learning and mobile exam platforms also affects their usefulness. Comprehensive, high-quality content on mobile learning and mobile examination platforms helps students to acquire knowledge and to test that knowledge directly in any taught subject. It also helps students to master the subject matter, especially since access is flexible: they can read the topics at a convenient time and examine themselves many times accordingly. Thus, for potential users, better use of mobile examination platforms comes from improving the quality of the content, which in turn helps to maximize users’ benefits and practice.
Service quality was found to influence both the perceived ease of use and perceived usefulness of mobile examination platforms. According to Freeze et al. [60], both perceived ease of use and perceived usefulness are primary indicators of the effective use of any system, and the quality of such a system was found to be essential for customer-centric services. Additionally, according to Simonova [12], Al-Dweeri et al. [124], and Al Dmour et al. [125], improving service quality remains the primary aim for business organizations that provide a wide range of services, especially those using mobile service applications. Accordingly, it becomes clear that mobile examination platforms have made the learning process convenient for the majority of students in different disciplines, such as engineering, medicine, business, and information technology. While these platforms can be used for taking online exams, they also serve to enhance innovative learning through hosting brainstorming sessions and holding interactive lectures. Good examples of this are mentioned by Akour et al. [126] and Bacca-Acosta et al. [127], and such uses in turn help in enhancing students’ retention and satisfaction [128,129,130]. System quality and content quality are found to be prerequisite drivers that affect students’ acceptance and adoption of mobile examination and learning applications, as declared by Liu et al. [131]. The offered system quality and the quality of the content help students to better perceive the extent to which a particular mobile examination application can be useful to them and how user-friendly it is. This issue is discussed and confirmed by many scholars, such as Day et al. [132], who emphasized the need for high-quality, safe content in teaching mobile applications, especially technical ones. Moreover, Gorla et al. [133] pointed out the necessity of high-quality IT management systems, information, and services, which in turn affect users’ ability to use mobile examination platforms efficiently.

5.1. Theoretical and Practical Implications of the Study and Recommendations

Manner et al. [134] looked at the theoretical implications of mobile exam platforms from the perspective of three academic disciplines, namely sociology, technology, and pedagogy. The mobile exam, as a means of supporting social inclusion, needs explicit principles on what is being learned as well as on what counts as effective outcomes [135]. This is also where constructivist education theory comes in, according to Nikou et al. [136]. The technological requirements must be developed based on a tested and educated understanding of the technical support of mobile examination platforms [137]. The practical implications of mobile examination platforms dictate the provision of proper and safe testing grounds for different types of candidates [138]. Thus, additional theoretical research and more practical tests are needed to check the practicality and to evaluate the performance and consequences of the applications. Mobile exam software must work dynamically to be user-friendly and to provide direct feedback to all candidates taking the test [139]. Educational institutions that wish to pursue the use of mobile phone examination platforms should invest more in developing system and service quality, and work intensively on enhancing information quality and the quality of exam content. Currently, thousands of educational institutions around the world are facing the COVID-19 pandemic and are under pressure from governments to commit to both online and blended learning. Based on this, it is evident that the hundreds of millions of students who can neither attend classes nor take part in traditional examinations would find the use of both electronic exams and mobile phone exam techniques an appropriate solution.
The introduction and increased use of mobile examination platforms by educational institutions serve to facilitate the teaching and examination processes. As opined by Al Masri [140], students can take exams via their mobile phones at a time convenient to them, but nobody can monitor the exam process and evaluate the performance. Students might get the answers from the Internet, which may have an adverse effect on their true knowledge levels [36]. Therefore, teachers must set questions for which the answers cannot easily be found on the Internet or in books [141]. Furthermore, it is crucial to be able to set a timer for each question so that students do not have sufficient time to search the Internet for the answer [142]. Mobile examination platforms are also susceptible to fraud [14]. Technical errors may cause some difficulties in using mobile phone examination platforms; for example, a student who encounters a system failure or smartphone malfunction may miss sitting the exam [143], or there may be difficulties in controlling and securing the exam environment.
In times of emergency and natural crisis, all governmental institutions find themselves under pressure to carry out their functions in the best way possible under the newly imposed circumstances. At this time, educational institutions are being asked to take definite steps towards planning and applying mobile phone learning and examination technologies [144]. At the onset of the current crisis in early 2020, the use of and intention to use such technologies were still at an introductory stage, and additional investment is needed to enhance the mobile phone education and examination environment and culture [145]. This study seeks to provide both theoretical and empirical approaches to understanding the drivers behind the use of the main mobile phone exam platforms and to highlight which of these drivers need to be planned for and employed properly. Earlier studies confirmed the necessity of using such mobile phone examination systems and applications [39,49], and, these days, that necessity is greater than at any other time.

5.2. Research Limitations

This study was conducted to investigate the main factors affecting university students’ intention to use mobile examination platforms. It is an initial study and can be classified as exploratory, checking the suitability of using smartphones as a proper platform for conducting some student examinations. The employment of smartphones as an examination platform should be validated and checked using mixed-method research approaches. Additionally, it should be remembered that smartphones as an examination platform will not fit all examination levels (e.g., evaluation and critique) and might not be a good substitute for classical examination methods; nevertheless, they are worth using, and it is important to shed more light on how they can be used within academia. Accordingly, using smartphone examination platforms for testing higher-order learning outcomes, such as critique, evaluation, and even explanation, may be limited and seen as inappropriate from the instructors’ point of view. Lecturers and instructors use different examination methods to check their students’ understanding and knowledge. However, using such an approach for postgraduate students’ examinations needs to be checked in more detail, and its effectiveness should be tested across different disciplines. A limited amount of primary data was collected for analysis. A larger sample size is essential, especially to test the intention to use such platforms within different pedagogical settings. Future scholarly work regarding the use of mobile phone exam platforms could encompass a larger number of students across various levels of study and disciplines, bearing in mind that openness to the use of mobile phone examination platforms can differ from one discipline to another. Thus, additional theoretical studies and real classroom applications are needed, especially to test the adoption of such platforms and their interrelated elements, namely system quality, information quality, and content quality. Additional factors that might be worth investigating are enjoyment and entertainment value and how such factors could potentially increase the intention to use, the use, and the repeat use of such new exam techniques. There remain obstacles to the comprehensive use of such platforms by the majority of students, considering that some students do not own a smartphone and others may find it difficult to use the applications without help. However, the amount of research carried out on students’ orientation towards, and their experiences with, using such platforms is limited, and this is a potential aspect to be addressed in later studies. Moreover, it would be useful to apply such research in a real examination setting, such as quizzes, which rely more on simple examination methods such as true/false or multiple-choice questions. The next step is to analyze practical results, instead of relying on the respondents’ feelings and thoughts captured on a Likert scale. In other words, it is important to have practical results: this needs to be investigated using other methods, such as gathering users’ and instructors’ views qualitatively and exploring the findings using content analysis techniques, to strengthen the case for a smartphone examination platform. To sum up, this study pioneers this issue, and more investigation is needed.

Author Contributions

Conceptualization, M.T.A. and S.A.S.; methodology, S.A.S.; software, B.A.K.; validation, A.Q.A., S.A., and A.D.; formal analysis, M.A.; investigation, R.M.; resources, M.T.A.; data curation, B.A.K.; writing—original draft preparation, S.A.; writing—review and editing, R.M.; visualization, S.A.S.; supervision, A.D.; project administration, A.Q.A.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

I would like to express my appreciation to my co-authors for agreeing to participate in this study, especially as they work at five different universities, namely the University of Sharjah, the University of Jordan, the Hashemite University, Al-Balqa’ Applied University, and Princess Sumaya University for Technology, located in the United Arab Emirates and Jordan. Without them, this paper would not have been published. I would also like to thank the reviewers for their valuable comments and feedback.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kemp, S. Every Single Stat You Need to Know about the Internet; Digital Trends: Portland, OR, USA, 2019.
2. Ng’Ambi, D.; Knaggs, A. Using Mobile Phones for Exam Preparation. In Proceedings of the IADIS International Mobile Learning Conference, Algarve, Portugal, 11–13 April 2008; pp. 35–42.
3. Nisbet, D.; Austin, D. Enhancing ESL Vocabulary Development through the Use of Mobile Technology. J. Adult Educ. 2013, 42, 1–7.
4. Alshurideh, M.; Salloum, A.; Kurdi, B.; Azza, M. Understanding the Quality Determinants that Influence the Intention to Use the Mobile Learning Platforms: A Practical Study. Int. J. Interact. Mob. Technol. 2019, 13, 157–183.
5. Kuznekoff, J.H.; Titsworth, S. The Impact of Mobile Phone Usage on Student Learning. Commun. Educ. 2013, 62, 233–252.
6. Salloum, S.A.; Maqableh, W.; Mhamdi, C.; Al Kurdi, B.; Shaalan, K. Studying the Social Media Adoption by University Students in the United Arab Emirates. Int. J. Inf. Technol. Lang. Stud. 2018, 2, 83–95.
7. Glahn, C.; Gruber, M.R.; Tartakovski, O. Beyond Delivery Modes and Apps: A Case Study on Mobile Blended Learning in Higher Education. In Design for Teaching and Learning in a Networked World; Springer: Berlin/Heidelberg, Germany, 2015; pp. 127–140.
8. Rambe, P.; Bere, A. Using Mobile Instant Messaging to Leverage Learner Participation and Transform Pedagogy at a South African University of Technology. Br. J. Educ. Technol. 2013, 44, 544–561.
9. Habel, C.; Stubbs, M. Mobile Phone Voting for Participation and Engagement in a Large Compulsory Law Course. Res. Learn. Technol. 2014, 22.
10. Sayedalamin, Z.; Alshuaibi, A.; Almutairi, O.; Baghaffar, M.; Jameel, T.; Baig, M. Utilization of Smart Phones Related Medical Applications among Medical Students at King Abdulaziz University, Jeddah: A Cross-Sectional Study. J. Infect. Public Health 2016, 9, 691–697.
11. Alhashmi, S.F.S.; Alshurideh, M.; Al Kurdi, B.; Salloum, S.A. A Systematic Review of the Factors Affecting the Artificial Intelligence Implementation in the Health Care Sector. In Joint European-US Workshop on Applications of Invariance in Computer Vision; Springer: Cham, Switzerland, 2020; pp. 37–49.
12. Simonova, I. Mobile Devices in Technical and Engineering Education with Focus on ESP. Int. J. Interact. Mob. Technol. 2016, 10, 33.
13. Looi, C.-K.; Zhang, B.; Chen, W.; Seow, P.; Chia, G.; Norris, C.; Soloway, E. 1:1 Mobile Inquiry Learning Experience for Primary Science Students: A Study of Learning Effectiveness. J. Comput. Assist. Learn. 2010, 27, 269–287.
14. Karadeniz, S. Effects of Gender and Test Anxiety on Student Achievement in Mobile Based Assessment. Procedia-Soc. Behav. Sci. 2011, 15, 3173–3178.
15. Alzaza, N.S.; Yaakub, A.R. Students’ Awareness and Requirements of Mobile Learning Services in the Higher Education Environment. Am. J. Econ. Bus. Adm. 2011, 3, 95–100.
16. Al Kurdi, B.; Alshurideh, M.; Salloum, S.A.; Obeidat, Z.M.; Al-Dweeri, R.M. An Empirical Investigation into Examination of Factors Influencing University Students’ Behavior towards Elearning Acceptance Using SEM Approach. Int. J. Interact. Mob. Technol. 2020, 14, 19–41.
17. Bursalioglu, O.; Murat, L.U.Y.; Volkan, A.; Erguzen, A. Mobile Device Supported Online Examination System Appropriate to Distance Learning. Int. E-J. Adv. Educ. 2016, 2, 95–104.
18. Salloum, S.A.; Shaalan, K. Adoption of E-Book for University Students. In Proceedings of the Advances in Human Factors, Business Management, Training and Education, Washington, DC, USA, 24–28 July 2019; Metzler, J.B., Ed.; Springer: Cham, Switzerland, 2019; pp. 481–494.
19. Al-Hakeem, M.S.; Abdulrahman, M.S. Developing a New e-Exam Platform to Enhance the University Academic Examinations: The Case of Lebanese French University. Int. J. Mod. Educ. Comput. Sci. 2017, 9, 9–16.
20. Wilkinson, K.; Barter, P. Do Mobile Learning Devices Enhance Learning in Higher Education Anatomy Classrooms? J. Pedagog. Dev. 2016, 6, 14–23.
21. Dobbins, C.; Denton, P. MyWallMate: An Investigation into the Use of Mobile Technology in Enhancing Student Engagement. TechTrends 2017, 61, 541–549.
22. Sung, Y.-T.; Chang, K.-E.; Liu, T.-C. The Effects of Integrating Mobile Devices with Teaching and Learning on Students’ Learning Performance: A Meta-Analysis and Research Synthesis. Comput. Educ. 2016, 94, 252–275.
23. Lim, W.N. Improving Student Engagement in Higher Education through Mobile-Based Interactive Teaching Model Using Socrative. In Proceedings of the 2017 IEEE Global Engineering Education Conference (EDUCON), Athens, Greece, 25–28 April 2017; pp. 404–412.
24. Nikou, S.A.; Economides, A.A. Acceptance of Mobile-Based Assessment from the Perspective of Self-Determination Theory of Motivation. In Proceedings of the 2014 IEEE 14th International Conference on Advanced Learning Technologies, Athens, Greece, 7–10 July 2014; pp. 454–458.
25. Lalitha, V.; Periasamy, J.K. Mobile Based Secured Student Online Exam System. Int. J. Eng. Technol. 2018, 7, 118–120.
26. Murphy, H.E. Digitalizing Paper-Based Exams: An Assessment of Programming Grading Assistant. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education, Seattle, WA, USA, 8–11 March 2017; pp. 775–776.
27. Kaiiali, M.; Ozkaya, A.; Altun, H.; Haddad, H.; Alier, M. Designing a Secure Exam Management System (SEMS) for M-Learning Environments. IEEE Trans. Learn. Technol. 2016, 9, 258–271.
28. Shyshkanova, G.; Zaytseva, T.; Frydman, O. Mobile Technologies Make Education a Part of Everyday Life. Inf. Learn. Sci. 2017, 118, 570–582.
29. Katz, J.E. Mobile Phones in Educational Settings. In Magic in the Air; Routledge India: New Delhi, India, 2017; pp. 87–101.
30. Han, I.; Shin, W.S. The Use of a Mobile Learning Management System and Academic Achievement of Online Students. Comput. Educ. 2016, 102, 79–89.
31. De Witt, C.; Gloerfeld, C. Mobile Learning and Higher Education. In The Digital Turn in Higher Education; Springer: Berlin/Heidelberg, Germany, 2018; pp. 61–79.
32. Kadam, A.J.; Singh, A.; Jagtap, K.; Tankala, S. Mobile Web Based Android Application for College Management System. Int. J. Eng. Comput. Sci. 2017, 6, 20206–20209.
33. Joo, Y.J.; Kim, N.; Kim, N.H. Factors Predicting Online University Students’ Use of a Mobile Learning Management System (m-LMS). Educ. Technol. Res. Dev. 2016, 64, 611–630.
34. Zahay, D.; Kumar, A.; Trimble, C. Motivation and Active Learning to Improve Student Performance: An Extended Abstract. In Developments in Marketing Science: Proceedings of the Academy of Marketing Science; Metzler, J.B., Ed.; Springer: Cham, Switzerland, 2017; pp. 1259–1263.
35. Chang, R.-C.; Yang, C.-Y. Developing a Mobile App for Game-Based Learning in Middle School Mathematics Course. In Proceedings of the 2016 International Conference on Applied System Innovation (ICASI), Okinawa, Japan, 26–30 May 2016; pp. 1–2.
36. Basoglu, E.B.; Akdemir, O. A Comparison of Undergraduate Students’ English Vocabulary Learning: Using Mobile Phones and Flash Cards. Turk. Online J. Educ. Technol. 2010, 9, 1–7.
37. Fayyoumi, A.; Mohammad, H.; Faris, H. Mobile Based Learning and Examination: Students and Instructors Perceptions from Different Arab Countries. J. Softw. Eng. Appl. 2013, 6, 662–669.
38. García Laborda, J.; Magal Royo, T.; Litzler, M.F.; Gimenez Lopez, J.L. Mobile Phones for Spain’s University Entrance Examination Language Test. J. Educ. Technol. Soc. 2014, 17, 17–30.
39. Alshurideh, M.; Al Kurdi, B.; Salloum, S. Examining the Main Mobile Learning System Drivers’ Effects: A Mix Empirical Examination of Both the Expectation-Confirmation Model (ECM) and the Technology Acceptance Model (TAM). In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics, Cairo, Egypt, 26–28 October 2019; pp. 406–417.
40. Aghazamani, A. How do University Students Spend their Time on Facebook? An Exploratory Study. J. Am. Sci. 2010, 6, 730–735.
41. Zolkepli, I.A.; Kamarulzaman, Y. Social Media Adoption: The Role of Media Needs and Innovation Characteristics. Comput. Hum. Behav. 2015, 43, 189–209.
42. Akar, E.; Mardikyan, S. Analyzing Factors Affecting Users’ Behavior Intention to Use Social Media: Twitter Case. Int. J. Bus. Soc. Sci. 2014, 5, 85–95.
43. Sife, A.; Lwoga, E.; Sanga, C. New Technologies for Teaching and Learning: Challenges for Higher Learning Institutions in Developing Countries. Int. J. Educ. Dev. Using ICT 2007, 3, 57–67.
44. Wiid, J.; Cant, M.C.; Nell, C. Open Distance Learning Students Perception of the Use of Social Media Networking Systems as an Educational Tool. Int. Bus. Econ. Res. J. (IBER) 2013, 12, 867.
45. Salloum, S.A.S.; Shaalan, K. Investigating Students’ Acceptance of E-Learning System in Higher Educational Environments in the UAE: Applying the Extended Technology Acceptance Model (TAM); The British University in Dubai: Dubai, United Arab Emirates, 2018.
46. Evans, J.R.; Lindsay, W.M. The Management and Control of Quality, 4th ed.; South-Western College Publishing: Cincinnati, OH, USA, 1999.
47. Salloum, S.A.; Alhamad, A.Q.M.; Al-Emran, M.; Monem, A.A.; Shaalan, K. Exploring Students’ Acceptance of E-Learning Through the Development of a Comprehensive Technology Acceptance Model. IEEE Access 2019, 7, 128445–128462.
  48. Al-Emran, M.; Mezhuyev, V.; Kamaludin, A.; AlSinani, M. Development of M-learning Application Based on Knowledge Man-Agement Processes. In Proceedings of the 2018 7th International conference on Software and Computer Applications (ICSCA 2018), Kuantan, Malaysia, 8–10 February 2018. [Google Scholar]
  49. Abuhashesh, M.; Al-Khasawneh, M.; Al-Dmour, R.; Masa’Deh, R. The Impact of Facebook on Jordanian Consumers’ Decision Process in the Hotel Selection. IBIMA Bus. Rev. 2019, 1–16. [Google Scholar] [CrossRef]
  50. Wang, Y.-S.; Wang, H.-Y.; Shee, D.Y. Measuring e-Learning Systems Success in an Organizational Context: Scale Development and Validation. Comput. Hum. Behav. 2007, 23, 1792–1808. [Google Scholar] [CrossRef]
  51. Al-Qaysi, N.; Al-Emran, M. Code-Switching Usage in Social Media: A Case Study from Oman. Int. J. Inf. Technol. Lang. Stud. 2017, 1, 25–38. [Google Scholar]
  52. Al Mazroa, M.; Gulliver, S. Understanding the Usage of Mobile Payment Systems—The Impact of Personality on the Continuance Usage. In Proceedings of the 2018 4th International Conference on Information Management (ICIM); Institute of Electrical and Electronics Engineers (IEEE), England, UK, 25–27 May 2018; pp. 188–194. [Google Scholar]
  53. Bates, A.W.; Bates, T. Technology, E-Learning and Distance Education; Psychology Press: Hove, UK, 2005. [Google Scholar]
  54. Chang, C.-C.; Hung, S.-W.; Cheng, M.-J.; Wu, C.-Y. Exploring the Intention to Continue Using Social Networking Sites: The Case of Facebook. Technol. Forecast. Soc. Chang. 2015, 95, 48–56. [Google Scholar] [CrossRef]
  55. Bhattacherjee, A. Understanding Information Systems Continuance: An Expectation-Confirmation Model. MIS Q. 2001, 25, 351–370. [Google Scholar] [CrossRef]
  56. Chen, C.-F. Investigating Structural Relationships between Service Quality, Perceived Value, Satisfaction, and Behavioral Intentions for Air Passengers: Evidence from Taiwan. Transp. Res. Part A Policy Pract. 2008, 42, 709–717. [Google Scholar] [CrossRef]
  57. Chou, H.-K.; Lin, I.-C.; Woung, L.-C.; Tsai, M.-T. Engagement in E-Learning Opportunities: An Empirical Study on Patient Education Using Expectation Confirmation Theory. J. Med. Syst. 2012, 36, 1697–1706. [Google Scholar] [CrossRef] [PubMed]
  58. Dumpit, D.Z.; Fernandez, C.J. Analysis of the Use of Social Media in Higher Education Institutions (HEIs) Using the Technology Acceptance Model. Int. J. Educ. Technol. High. Educ. 2017, 14, 5. [Google Scholar] [CrossRef]
  59. Pituch, K.A.; Lee, Y.-K. The Influence of System Characteristics on E-Learning Use. Comput. Educ. 2006, 47, 222–244. [Google Scholar] [CrossRef]
  60. Freeze, R.D.; Alshare, K.A.; Lane, P.L.; Wen, H.J. IS Success Model in E-Learning Context Based on Students’ Perceptions. J. Inf. Syst. Educ. 2010, 21, 173–184. [Google Scholar]
  61. Gachago, D.; Ivala, E. Social Media for Enhancing Student Engagement: The Use of Facebook and Blogs at a University of Technology. S. Afr. J. High. Educ. 2012, 26, 152–167. [Google Scholar]
  62. Habes, M.; Alghizzawi, M.; Khalaf, R.; Salloum, S.A.; Ghani, M.A. The Relationship between Social Media and Academic Performance: Facebook Perspective. Int. J. Inf. Technol. Lang. Stud. 2018, 2, 12–18. [Google Scholar]
  63. Thong, J.Y.L.; Hong, S.-J.; Tam, K.Y. The Effects of Post-Adoption Beliefs on the Expectation-Confirmation Model for Information Technology Continuance. Int. J. Hum. Comput. Stud. 2006, 64, 799–810. [Google Scholar] [CrossRef]
  64. Obeidat, B.Y.; Sweis, R.J.; Zyod, D.S.; Masa’Deh, R.; Moh’D, T.; Alshurideh, M. The Effect of Perceived Service Quality on Customer Loyalty in Internet Service Providers in Jordan. J. Manag. Res. 2012, 4, 224–242. [Google Scholar] [CrossRef] [Green Version]
  65. Zu’bi, Z.; Al-Lozi, M.; Dahiyat, S.; Alshurideh, M.; Al Majali, A. Examining the Effects of Quality Management Practices on Product Variety. Eur. J. Econ. Financ. Adm. Sci. 2012, 51, 123–139. [Google Scholar]
  66. Hair, J.; Hollingsworth, C.L.; Randolph, A.B.; Chong, A.Y.L. An Updated and Expanded Assessment of PLS-SEM in Information Systems Research. Ind. Manag. Data Syst. 2017, 117, 442–458. [Google Scholar] [CrossRef]
  67. Davis, F.D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef] [Green Version]
  68. Prestridge, S. A Focus on Students’ Use of Twitter–Their Interactions with Each Other, Content and Interface. Act. Learn. High. Educ. 2014, 15, 101–115. [Google Scholar] [CrossRef] [Green Version]
  69. Henseler, J.; Ringle, C.M.; Sarstedt, M. A New Criterion for Assessing Discriminant Validity in Variance-Based Structural Equation Modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef] [Green Version]
  70. McKinney, V.; Yoon, K.F.; Mariam, Z. The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach. Inf. Syst. Res. 2002, 13, 296–315. [Google Scholar] [CrossRef]
  71. Keller, C.; Cernerud, L. Students’ Perceptions of E-Learning in University Education. J. Educ. Media 2002, 27, 55–67. [Google Scholar] [CrossRef]
  72. Koper, R. Use of the Semantic Web to Solve Some Basic Problems in Education: Increase Flexible, Distributed Lifelong Learning; Decrease Teacher’s Workload. J. Interact. Media Educ. 2004, 2004, 5. [Google Scholar] [CrossRef] [Green Version]
  73. Palmer, S. Characterisation of the use of Twitter by Australian Universities. J. High. Educ. Policy Manag. 2013, 35, 333–344. [Google Scholar] [CrossRef] [Green Version]
  74. Lee, M.J.W.; Chan, A. Pervasive, Lifestyle-Integrated Mobile Learning for Distance Learners: An Analysis and Unexpected Results from a Podcasting Study. Open Learn. 2007, 22, 201–218. [Google Scholar] [CrossRef]
  75. Lieberman, J.N. Playfulness: Its Relationship to Imagination and Creativity; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  76. Tam, C.; Santos, D.; Oliveira, T. Exploring the Influential Factors of Continuance Intention to Use Mobile Apps: Extending the Expectation Confirmation Model. Inf. Syst. Front. 2020, 22, 243–257. [Google Scholar] [CrossRef]
  77. Lin, C.S.; Wu, S.; Tsai, R.J. Integrating Perceived Playfulness into Expectation-Confirmation Model for Web Portal Context. Inf. Manag. 2005, 42, 683–693. [Google Scholar] [CrossRef]
  78. Lin, J.C.-C.; Lu, H. Towards an Understanding of the Behavioural Intention to Use a Web Site. Int. J. Inf. Manag. 2000, 20, 197–208. [Google Scholar] [CrossRef]
  79. Nikou, S.A.; Economides, A.A. Mobile-Based Assessment: Investigating the Factors that Influence Behavioral Intention to Use. Comput. Educ. 2017, 109, 56–73. [Google Scholar] [CrossRef]
  80. Liu, Y.; Li, H.; Carlsson, C. Factors Driving the Adoption of m-Learning: An Empirical Study. Comput. Educ. 2010, 55, 1211–1219. [Google Scholar] [CrossRef]
  81. Joo, Y.J.; Lim, K.Y. Investigating the Structural Relationship Among Perceived Innovation Attributes, Intention to Use and Actual Use of Mobile Learning in an Online University in South Korea. Australas. J. Educ. Technol. 2014, 30, 427–439. [Google Scholar] [CrossRef]
  82. Mingle, J.; Adams, M. Social Media Network Participation and Academic Performance in Senior High Schools in Ghana. Libr. Philos. Pract. 2015, 1286, 7–21. [Google Scholar]
  83. Padilla-Meléndez, A.; del Aguila-Obra, A.R.; Garrido-Moreno, A. Perceived Playfulness, Gender Differences and Technology Acceptance Model in a Blended Learning Scenario. Comput. Educ. 2013, 63, 306–317. [Google Scholar] [CrossRef]
  84. Levy, Y.; Ramim, M.M.; Furnell, S.M.; Clarke, N.L. Comparing Intentions to Use University-Provided vs Vendor-Provided Multibiometric Authentication in Online Exams. Campus-Wide Inf. Syst. 2011, 28, 102–113. [Google Scholar] [CrossRef]
  85. Krejcie, R.V.; Morgan, D.W. Determining Sample Size for Research Activities. Educ. Psychol. Meas. 1970, 30, 607–610. [Google Scholar] [CrossRef]
  86. Chuan, C.L.; Penyelidikan, J. Sample Size Estimation Using Krejcie and Morgan and Cohen Statistical Power Analysis: A Comparison. J. Penyelid. IPBL 2006, 7, 78–86. [Google Scholar]
  87. Hair, J.F., Jr.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM); Sage Publications: Thousand Oaks, CA, USA, 2016. [Google Scholar]
  88. Rym, B.; Olfa, B.; Mélika, B.M. Determinants of E-Learning Acceptance: An Empirical Study in the Tunisian Context. Am. J. Ind. Bus. Manag. 2013, 3, 307–321. [Google Scholar] [CrossRef] [Green Version]
  89. Almaiah, M.A.; Jalil, M.A.; Man, M. Extending the TAM to Examine the Effects of Quality Features on Mobile Learning Acceptance. J. Comput. Educ. 2016, 3, 453–485. [Google Scholar] [CrossRef]
  90. Cheng, Y.-M. Antecedents and Consequences of E-Learning Acceptance. Inf. Syst. J. 2010, 21, 269–299. [Google Scholar] [CrossRef]
  91. Alsabawy, A.Y.; Cater-Steel, A.; Soar, J. Determinants of Perceived Usefulness of E-Learning Systems. Comput. Hum. Behav. 2016, 64, 843–858. [Google Scholar] [CrossRef]
  92. Aparicio, M.; Bacao, F.; Oliveira, T. Grit in the Path to E-Learning Success. Comput. Hum. Behav. 2017, 66, 388–399. [Google Scholar] [CrossRef]
  93. Roca, J.C.; Chiu, C.-M.; Martínez, F.J. Understanding e-Learning Continuance Intention: An Extension of the Technology Acceptance Model. Int. J. Hum. Comput. Stud. 2006, 64, 683–696. [Google Scholar] [CrossRef] [Green Version]
  94. Mohammadi, H. Investigating Users’ Perspectives on E-Learning: An Integration of TAM and IS Success Model. Comput. Hum. Behav. 2015, 45, 359–374. [Google Scholar] [CrossRef]
  95. Chang, C.-T.; Hajiyev, J.; Su, C.-R. Examining the Students’ Behavioral Intention to Use e-Learning in Azerbaijan? The General Extended Technology Acceptance Model for E-learning Approach. Comput. Educ. 2017, 111, 128–143. [Google Scholar] [CrossRef]
  96. Fathema, N.; Shannon, D.; Ross, M. Expanding the Technology Acceptance Model (TAM) to Examine Faculty Use of Learning Management Systems (LMSs) In Higher Education Institutions. J. Online Learn. Teach. 2015, 11, 210–232. [Google Scholar]
  97. Marjanovic, U.; Delić, M.; Lalic, B. Developing a Model to Assess the Success of E-Learning Systems: Evidence from a Manufacturing Company in Transitional Economy. Inf. Syst. E-Bus. Manag. 2016, 14, 253–272. [Google Scholar] [CrossRef]
  98. Nunnally, J.C. Psychometric Theory, 2nd ed.; McGraw-Hill: New York, NY, USA, 1978. [Google Scholar]
  99. Al-Emran, M.; Salloum, S.A. Students’ Attitudes Towards the Use of Mobile Technologies in e-Evaluation. Int. J. Interact. Mob. Technol. 2017, 11, 195–202. [Google Scholar] [CrossRef]
  100. Ringle, C.M.; Wende, S.; Becker, J.-M. SmartPLS 3. Bönningstedt: SmartPLS. 2015. Available online: https://www.smartpls.com/ (accessed on 29 March 2021).
  101. Urbach, N.; Ahlemann, F. Structural Equation Modeling in Information Systems Research Using Partial Least Squares. J. Inf. Technol. Theory Appl. 2010, 11, 5–40. [Google Scholar]
  102. Goodhue, D.L.; Lewis, W.; Thompson, R. Does PLS Have Advantages for Small Sample Size or Non-Normal Data? MIS Q. 2012, 36, 981–1001. [Google Scholar] [CrossRef] [Green Version]
  103. Barclay, D.; Higgins, C.; Thompson, R. The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Adoption and Use as an Illustration. Technol. Stud. 1995, 2, 285–324. [Google Scholar]
  104. Nunnally, J.C.; Bernstein, I.H. Psychometric Theory, 3rd ed.; McGraw-Hill: New York, NY, USA, 1994. [Google Scholar]
  105. Kline, R.B. Principles and Practice of Structural Equation Modeling, 4th ed.; Guilford Publications: New York, NY, USA, 2015. [Google Scholar]
  106. Dijkstra, T.K.; Henseler, J. Consistent and asymptotically normal PLS estimators for linear structural equations. Comput. Stat. Data Anal. 2015, 81, 10–23. [Google Scholar] [CrossRef] [Green Version]
  107. Hair, J.F., Jr.; Ringle, C.M.; Sarstedt, M. PLS-SEM: Indeed a Silver Bullet. J. Mark. Theory Pr. 2011, 19, 139–152. [Google Scholar] [CrossRef]
  108. Henseler, J.; Ringle, C.M.; Sinkovics, R.R. The Use of Partial Least Squares Path Modeling in International Marketing. In New Challenges to International Marketing; Emerald Group Publishing Limited: Bingley, UK, 2009; Volume 20, pp. 277–319. [Google Scholar]
  109. Fornell, C.; Larcker, D.F. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  110. Trial, D. Model Fit. Available online: https://www.smartpls.com/documentation/algorithms-and-techniques/model-fit (accessed on 29 March 2021).
  111. Hu, L.; Bentler, P.M. Fit Indices in Covariance Structure Modeling: Sensitivity to Underparameterized Model Misspecification. Psychol. Methods 1998, 3, 424. [Google Scholar] [CrossRef]
  112. Bentler, P.M.; Bonett, D.G. Significance Tests and Goodness of Fit in the Analysis of Covariance Structures. Psychol. Bull. 1980, 88, 588. [Google Scholar] [CrossRef]
  113. Lohmöller, J.B. Latent Variable Path Modeling with Partial Least Squares; Physica-Verlag: Heidelberg, Germany, 1989. [Google Scholar]
  114. Henseler, J.; Dijkstra, T.K.; Sarstedt, M.; Ringle, C.M.; Diamantopoulos, A.; Straub, D.W.; Ketchen, D.J., Jr.; Hair, J.F.; Hult, G.T.M.; Calantone, R.J. Common Beliefs and Reality About PLS: Comments on Rönkkö and Evermann (2013). Organ. Res. Methods 2014, 17, 182–209. [Google Scholar] [CrossRef] [Green Version]
  115. Milošević, I.; Živković, D.; Manasijević, D.; Nikolić, D. The Effects of the Intended Behavior of Students in the Use of M-learning. Comput. Hum. Behav. 2015, 51, 207–215. [Google Scholar] [CrossRef]
  116. Salloum, S.A.; Mhamdi, C.; Al Kurdi, B.; Shaalan, K. Factors Affecting the Adoption and Meaningful Use of Social Media: A Structural Equation Modeling Approach. Int. J. Inf. Technol. Lang. Stud. 2018, 2, 96–109. [Google Scholar]
  117. Alhashmi, S.F.S.; Salloum, S.A.; Abdallah, S. Critical Success Factors for Implementing Artificial Intelligence (AI) Projects in Dubai Government United Arab Emirates (UAE) Health Sector: Applying the Extended Technology Acceptance Model (TAM). In Artificial Intelligence, Computer and Software Engineering Advances; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2019; Volume 1058, pp. 393–405. [Google Scholar]
  118. Al-Maroof, R.S.; Salloum, S.A. An Integrated Model of Continuous Intention to Use of Google Classroom. In Recent Advances in Intelligent Systems and Smart Applications; Al-Emran, M., Shaalan, K., Hassanien, A., Eds.; Springer: Cham, Switzerland, 2021; Volume 295. [Google Scholar]
  119. Aburayya, A.; Alshurideh, M.; Al Marzouqi, A.; Al Diabat, O.; Alfarsi, A.; Suson, R.; Bash, M.; Salloum, S.A. An Empirical Examination of the Effect of TQM Practices on Hospital Service Quality: An Assessment Study in UAE Hospitals. Syst. Rev. Pharm. 2020, 11, 347–362. [Google Scholar]
  120. Saeed Al-Maroof, R.; Alhumaid, K.; Salloum, S. The Continuous Intention to Use E-Learning, from Two Different Perspectives. Educ. Sci. 2021, 11, 6. [Google Scholar] [CrossRef]
  121. Chin, W.W. The Partial Least Squares Approach to Structural Equation Modeling. Mod. Methods Bus. Res. 1998, 295, 295–336. [Google Scholar]
  122. Dreheeb, A.E.; Basir, N.; Fabil, N. Impact of System Quality on Users’ Satisfaction in Continuation of the Use of e-Learning System. Int. J. e-Educ. e-Bus. e-Manag. e-Learn. 2016, 6, 13. [Google Scholar] [CrossRef] [Green Version]
  123. Senapathi, M.; Srinivasan, A. An Empirical Investigation of the Factors Affecting Agile Usage. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE 2014), London, UK, 13–14 May 2014; ACM: New York, NY, USA; pp. 1–10. [Google Scholar]
  124. Al-Dweeri, R.M.; Obeidat, Z.M.; Al-Dwiry, M.A.; Alshurideh, M.T.; Alhorani, A.M. The Impact of E-Service Quality and E-Loyalty on Online Shopping: Moderating Effect of E-Satisfaction and E-Trust. Int. J. Mark. Stud. 2017, 9, 92. [Google Scholar] [CrossRef] [Green Version]
  125. Al Dmour, H.; Alshurideh, M.; Shishan, F. The Influence of Mobile Application Quality and Attributes on the Continuance Intention of Mobile Shopping. Life Sci. J. 2014, 11, 172–181. [Google Scholar]
  126. Akour, I.; Alshurideh, M.; Al Kurdi, B.; Al Ali, A.; Salloum, S. Using Machine Learning Algorithms to Predict People’s Intention to Use Mobile Learning Platforms During the COVID-19 Pandemic: Machine Learning Approach. JMIR Med. Educ. 2021, 7, 1–17. [Google Scholar] [CrossRef]
  127. Bacca-Acosta, J.; Avila-Garzon, C. Student Engagement with Mobile-Based Assessment Systems: A Survival Analysis. J. Comput. Assist. Learn. 2021, 37, 158–171. [Google Scholar] [CrossRef]
  128. Alsharari, N.M.; Alshurideh, M.T. Student Retention in Higher Education: The Role of Creativity, Emotional Intelligence and Learner Autonomy. Int. J. Educ. Manag. 2021, 35, 233–247. [Google Scholar] [CrossRef]
  129. Alshamsi, A.; Alshurideh, M.; Al Kurdi, B.; Salloum, S.A. The Influence of Service Quality on Customer Retention: A Systematic Review in the Higher Education. In Proceedings of the Intelligent and Fuzzy Techniques in Big Data Analytics and Decision Making; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 2021; pp. 404–416. [Google Scholar]
  130. Alshurideh, M.; Al Kurdi, B.; Salloum, S.A.; Arpaci, I.; Al-Emran, M. Predicting the Actual Use of M-Learning Systems: A Comparative approach Using PLS-SEM and Machine Learning Algorithms. Interact. Learn. Environ. 2020, 1–15. [Google Scholar] [CrossRef]
  131. Liu, Y.; Han, S.; Li, H. Understanding the Factors Driving M-Learning Adoption: A Literature Review. Campus-Wide Inf. Syst. 2010, 27, 210–226. [Google Scholar] [CrossRef] [Green Version]
  132. Day, L.; Smith, E.L. Integrating Quality and Safety Content into Clinical Teaching in the Acute Care Setting. Nurs. Outlook 2007, 55, 138–143. [Google Scholar] [CrossRef] [PubMed]
  133. Gorla, N.; Somers, T.M.; Wong, B. Organizational Impact of System Quality, Information Quality, and Service Quality. J. Strat. Inf. Syst. 2010, 19, 207–228. [Google Scholar] [CrossRef]
  134. Manner, J.; Nienaber, D.; Schermann, M.; Krcmar, H. Six Principles for Governing Mobile Platforms. In Proceedings of the 11th International Conference on Wirtschaftsinformatik, Leipzig, Germany, 27 February–1 March 2013; pp. 1375–1389. [Google Scholar]
  135. Nikou, S.A.; Economides, A.A. The Impact of Paper-Based, Computer-Based and Mobile-Based Self-Assessment on Students’ Science Motivation and Achievement. Comput. Hum. Behav. 2016, 55, 1241–1248. [Google Scholar] [CrossRef]
  136. Nikou, S.A.; Economides, A.A. Student Achievement in Paper, Computer/Web and Mobile Based Assessment. In Proceedings of the 6th Balkan Conference in Informatics, BCI-LOCAL 2013, Thessaloniki, Greece, 19–21 September 2013; pp. 107–114. [Google Scholar]
  137. Tufekci, A.; Ekinci, H.; Kose, U. Development of an Internet-Based Exam System for Mobile Environments and Evaluation of its Usability. Mevlana Int. J. Educ. 2013, 3, 57–74. [Google Scholar] [CrossRef]
  138. Romero, C.; Ventura, S.; De Bra, P. Using Mobile and Web-Based Computerized Tests to Evaluate University Students. Comput. Appl. Eng. Educ. 2009, 17, 435–447. [Google Scholar] [CrossRef] [Green Version]
  139. Tan, T.-H.; Liu, T.-Y. The Mobile-Based Interactive Learning Environment (MOBILE) and a Case Study for Assisting Elementary School English Learning. In Proceedings of the IEEE International Conference on Advanced Learning Technologies, Joensuu, Finland, 30 August–1 September 2004; pp. 530–534. [Google Scholar]
  140. Al Masri, A. Using Mobile Phone for Assessing University Students in English Literature in Jordan. Eur. Sci. J. 2012, 8, 195–206. [Google Scholar]
  141. Coulby, C.; Hennessey, S.; Davies, N.; Fuller, R. The Use of Mobile Technology for Work-Based Assessment: The Student Experience. Br. J. Educ. Technol. 2011, 42, 251–265. [Google Scholar] [CrossRef]
  142. Georgieva, E. A Comparison Analysis of Mobile Learning Systems. Commun. Cogn. 2007, 40, 1–6. [Google Scholar]
  143. Hwang, G.-J.; Chang, H.-F. A Formative Assessment-Based Mobile Learning Approach to Improving the Learning Attitudes and Achievements of Students. Comput. Educ. 2011, 56, 1023–1031. [Google Scholar] [CrossRef]
  144. Sinclair, M. Education in Emergencies. In Learning for a Future: Refugee Education in Developing Countries; 2001; pp. 1–84. Available online: https://www.unhcr.org/3b8a1ba94.pdf (accessed on 29 March 2021).
  145. van Tryon, P.J.S.; Bishop, M.J. Theoretical Foundations for Enhancing Social Connectedness in Online Learning Environments. Distance Educ. 2009, 30, 291–315. [Google Scholar] [CrossRef]
Figure 1. Research Model.
Figure 2. Hypotheses’ testing results. * p < 0.05, ** p < 0.01.
Table 1. Constructs and their sources.

Constructs   Number of Items   Source
INT          2                 [4]
CONT         4                 [88,89,90]
INF          4                 [91,92,93,94]
PEOU         4                 [88,95,96]
PU           4                 [88,91,95,96]
SYS          4                 [91,92,93,96,97]
SERV         4                 [90,91,92,94]

Note: INT = intention to use mobile examination platforms; CONT = content quality; INF = information quality; PEOU = perceived ease of use; PU = perceived usefulness; SYS = quality of the system; SERV = service quality.
Table 2. The pilot study.

Construct   Cronbach’s Alpha
INT         0.868
CONT        0.882
INF         0.829
PEOU        0.799
PU          0.836
SYS         0.890
SERV        0.845

Note: INT = intention to use mobile examination platforms; CONT = content quality; INF = information quality; PEOU = perceived ease of use; PU = perceived usefulness; SYS = quality of the system; SERV = service quality.
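As an illustration of how the pilot reliability figures in Table 2 can be reproduced, the following minimal Python sketch computes Cronbach’s alpha from a respondents-by-items matrix. The item scores and the four-item construct are hypothetical, and the snippet is not the authors’ analysis script (the study itself used Smart PLS); it only shows the standard calculation behind the 0.70 threshold commonly attributed to Nunnally [98].

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert-scale scores."""
    k = items.shape[1]                              # number of items in the construct
    item_variances = items.var(axis=0, ddof=1)      # variance of each individual item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses for a four-item construct (e.g., PEOU1-PEOU4)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(round(cronbach_alpha(responses), 3))  # values >= 0.70 indicate acceptable internal consistency
```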
Table 3. Respondents’ demographic data.

Criterion                 Factor                                               Frequency   Percentage
Gender                    Female                                               271         48%
                          Male                                                 295         52%
Age                       18–29                                                320         57%
                          30–39                                                190         34%
                          40–49                                                48          8%
                          50–59                                                8           1%
College                   College of Business Administration                   242         43%
                          College of Humanities and Social Sciences            35          6%
                          College of Engineering and Information Technology    130         23%
                          College of General Education                         51          9%
                          College of Mass Communication and Public Relations   108         19%
Education qualification   Bachelor’s                                           395         70%
                          Master’s                                             105         19%
                          Doctorate                                            66          11%
Table 4. Convergent validity results that assure acceptable values (factor loading, Cronbach’s alpha, composite reliability, Dijkstra–Henseler’s rho ≥ 0.70 and AVE > 0.5).

Constructs                                Items   Factor Loading   Cronbach’s Alpha   CR      PA      AVE
Intention to use mobile exam platforms    INT1    0.799            0.815              0.828   0.821   0.625
                                          INT2    0.728
Content quality                           CONT1   0.758            0.718              0.755   0.780   0.661
                                          CONT2   0.865
                                          CONT3   0.859
                                          CONT4   0.796
Information quality                       INF1    0.839            0.753              0.801   0.798   0.650
                                          INF2    0.887
                                          INF3    0.740
                                          INF4    0.822
Perceived ease of use                     PEOU1   0.730            0.869              0.819   0.836   0.612
                                          PEOU2   0.777
                                          PEOU3   0.885
                                          PEOU4   0.848
Perceived usefulness                      PU1     0.799            0.852              0.903   0.894   0.709
                                          PU2     0.868
                                          PU3     0.912
                                          PU4     0.820
Quality of the system                     SYS1    0.760            0.806              0.887   0.889   0.598
                                          SYS2    0.850
                                          SYS3    0.884
                                          SYS4    0.826
Service quality                           SERV1   0.731            0.897              0.839   0.842   0.741
                                          SERV2   0.882
                                          SERV3   0.851
                                          SERV4   0.844

Note: CR = composite reliability; PA = Dijkstra–Henseler’s rho; AVE = average variance extracted.
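For reference, the composite reliability (CR) and average variance extracted (AVE) reported in Table 4 are conventionally obtained from the standardized outer loadings λ_i of a construct’s n indicators. The formulas below are the standard PLS-SEM formulations (see, e.g., [87,109]) and are shown only as a reading aid, not as the authors’ own derivation:

```latex
\mathrm{CR}=\frac{\left(\sum_{i=1}^{n}\lambda_{i}\right)^{2}}
{\left(\sum_{i=1}^{n}\lambda_{i}\right)^{2}+\sum_{i=1}^{n}\left(1-\lambda_{i}^{2}\right)},
\qquad
\mathrm{AVE}=\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}^{2}
```

For a construct with hypothetical loadings of 0.80, 0.85 and 0.90, for example, CR ≈ 6.5025/(6.5025 + 0.8275) ≈ 0.887 and AVE ≈ 2.1725/3 ≈ 0.724, both above the thresholds stated in the caption of Table 4.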
Table 5. Fornell–Larcker Scale.

        INT       CONT    INF     PEOU    PU      SYS     SERV
INT     0.798 *
CONT    0.430     0.852
INF     0.518     0.459   0.817
PEOU    0.514     0.600   0.528   0.832
PU      0.268     0.225   0.458   0.336   0.859
SYS     0.328     0.158   0.316   0.125   0.158   0.874
SERV    0.520     0.105   0.444   0.540   0.487   0.230   0.785

Note: INT = intention to use mobile exam platforms; CONT = content quality; INF = information quality; PEOU = perceived ease of use; PU = perceived usefulness; SYS = quality of the system; SERV = service quality. * Diagonal values represent the square root of the average variance extracted; the other matrix entries are the factor correlations.
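The Fornell–Larcker criterion in Table 5 requires the square root of each construct’s AVE (the diagonal entry) to exceed that construct’s correlations with every other construct. A minimal Python sketch of this check is shown below; it uses only an illustrative subset of the values from Table 5 (the full analysis in this study was run in Smart PLS).

```python
import numpy as np

constructs = ["INT", "CONT", "INF"]
# Lower-triangular matrix, subset of Table 5: diagonal = sqrt(AVE), below-diagonal = correlations
matrix = np.array([
    [0.798, 0.000, 0.000],
    [0.430, 0.852, 0.000],
    [0.518, 0.459, 0.817],
])
corr = matrix + np.tril(matrix, -1).T  # mirror the lower triangle to obtain a symmetric matrix
for i, name in enumerate(constructs):
    others = np.delete(corr[i], i)     # correlations with all other constructs
    ok = matrix[i, i] > others.max()
    print(f"{name}: sqrt(AVE) = {matrix[i, i]:.3f} > max correlation {others.max():.3f} -> {ok}")
```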
Table 6. Heterotrait–Monotrait Ratio (HTMT).

        INT     CONT    INF     PEOU    PU      SYS     SERV
INT
CONT    0.200
INF     0.652   0.698
PEOU    0.550   0.605   0.408
PU      0.391   0.300   0.399   0.105
SYS     0.205   0.574   0.498   0.618   0.501
SERV    0.299   0.505   0.345   0.700   0.544   0.229

Note: INT = intention to use mobile examination platforms; CONT = content quality; INF = information quality; PEOU = perceived ease of use; PU = perceived usefulness; SYS = quality of the system; SERV = service quality.
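For completeness, the HTMT values in Table 6 follow the heterotrait–monotrait ratio proposed by Henseler et al. [69]: the average correlation between the items of two different constructs divided by the geometric mean of the average correlations among the items within each of the two constructs. The rendering below is our restatement of that standard formula, where K_i and K_j denote the numbers of items of constructs i and j and r denotes item correlations:

```latex
\mathrm{HTMT}_{ij}=
\frac{\dfrac{1}{K_i K_j}\displaystyle\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{i_g,\,j_h}}
{\sqrt{\dfrac{2}{K_i (K_i-1)}\displaystyle\sum_{g<h} r_{i_g,\,i_h}\;\cdot\;\dfrac{2}{K_j (K_j-1)}\displaystyle\sum_{g<h} r_{j_g,\,j_h}}}
```

Values below the conservative 0.85 threshold are commonly taken as evidence of discriminant validity, a condition satisfied by all entries in Table 6.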
Table 7. Model fit indicators.

Criteria     Saturated Model   Estimated Model
SRMR         0.042             0.050
d_ULS        0.895             2.408
d_G          0.677             0.626
Chi-Square   470.827           482.459
NFI          0.715             0.738
RMS Theta    0.082
Table 8. R2 of the endogenous latent variables.

Constructs   R2      Results
INT          0.726   High
PEOU         0.719   High
PU           0.708   High

Note: INT = intention to use mobile examination platforms; PEOU = perceived ease of use; PU = perceived usefulness.
Table 9. Results of structural model examination (significant at * p < 0.05, ** p < 0.01).

Hypothesis   Relationship   Path Coefficient   t-Value   Direction   p-Value   Result
H1a          SYS -> PEOU    0.436              24.635    +           0.000     Accepted **
H1b          SYS -> PU      0.287              18.009    +           0.000     Accepted **
H2a          INF -> PEOU    0.769              15.546    +           0.000     Accepted **
H2b          INF -> PU      0.335              10.222    +           0.000     Accepted **
H3a          CONT -> PEOU   0.158              2.521     +           0.022     Accepted *
H3b          CONT -> PU     0.789              9.445     +           0.003     Accepted **
H4a          SERV -> PEOU   0.318              1.630     +           0.026     Accepted *
H4b          SERV -> PU     0.531              13.780    +           0.000     Accepted **
H5           PEOU -> PU     0.262              11.248    +           0.000     Accepted **
H6           PEOU -> INT    0.487              13.990    +           0.000     Accepted **
H7           PU -> INT      0.366              10.201    +           0.001     Accepted **

Note: INT = intention to use mobile examination platforms; CONT = content quality; INF = information quality; PEOU = perceived ease of use; PU = perceived usefulness; SYS = quality of the system; SERV = service quality.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
