Article

The Impact of the COVID-19 Pandemic on User Experience with Online Education Platforms in China

1 School of Statistics and Mathematics, Zhejiang Gongshang University, Hangzhou 310018, China
2 School of Management and E-business, Zhejiang Gongshang University, Hangzhou 310018, China
3 Department of Computer Science and Information Systems, University of North Georgia, Oakwood, GA 30566, USA
4 School of Tourism and Urban-Rural Planning, Zhejiang Gongshang University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(18), 7329; https://doi.org/10.3390/su12187329
Submission received: 7 August 2020 / Revised: 1 September 2020 / Accepted: 2 September 2020 / Published: 7 September 2020

Abstract

During the COVID-19 pandemic, education has shifted from face-to-face to online delivery in order to avoid large gatherings and block the transmission of the virus. To analyze the impact of the pandemic on user experience and to capture users' requirements in depth, this paper constructs an evaluation index system by collecting user reviews of seven major online education platforms before and after the outbreak of COVID-19 and combining sentiment analysis, hot-topic mining, and the relevant literature. The coefficient of variation method is used to weight each index according to the dispersion of its values across platforms. A comprehensive evaluation method is then applied to compare user experience before and after the outbreak and to identify how users' concerns about online education platforms have changed. In terms of access speed, reliability, timely transmission of video information, course management, communication and interaction, and learning and technical support, this paper explores the supporting capacity and response level of online education platforms during COVID-19 and proposes corresponding measures to improve how these platforms function.

1. Introduction

The global spread of COVID-19 resulted in the suspension of classes for students in more than 60 countries, disrupting the original teaching plans of schools in these countries and regions [1]. As the first country to detect the spread of the virus, China was also deeply affected [2]. Under the influence of the pandemic, schools in China were forced to suspend classes. However, large-scale school suspension disrupts teaching progress [3]. In order to minimize the impact of the pandemic on education while controlling its spread, online teaching became a necessary strategy for restoring normal teaching order during this special period. In the face of massive demand, some office-meeting software providers, supported by strong technical capability and keen insight into market opportunities, developed and improved relevant functions and, together with many education platforms, became venues for teachers and students to conduct online teaching. Among these platforms, office-meeting software represented by DingTalk provided services such as online classrooms and other online teaching functionalities. However, due to technical and functional defects, these online teaching platforms were criticized by millions of students; for instance, the satisfaction rating of DingTalk quickly dropped from 5 stars to 1 star [4].
The above phenomenon shows that although the online education industry has broad application prospects, its development process exposes existing problems, especially during public emergencies. Since MOOCs (Massive Open Online Courses) came into wide use in 2013, online teaching modes have gradually become familiar. At the same time, online education has attracted increasing attention owing to advantages such as removing spatio-temporal limitations and improving the fairness of education [5]. Furthermore, research shows that the MOOC teaching model can advance teachers' careers and improve their teaching skills [6]. However, due to technological restrictions, online education has so far focused on vocational and tutorial classes rather than basic and professional courses, and most schools still adopt traditional teaching methods [7]. As a result, school suspension remains a common risk-aversion measure in the education sector when facing public health emergencies such as COVID-19.
The current pandemic has prompted the reform of existing teaching modes, making online education shift suddenly from an auxiliary method to the primary one. This change brings obvious difficulties and challenges to online education platforms, including changes in users' concerns about online education, new requirements for online education, the extent to which these requirements are satisfied, and the quality of live broadcasting. All of these issues need further consideration and discussion.
These platforms have provided strong support for education during the pandemic and brought users a new experience, but they have also generated considerable controversy. To understand these disputes at their source, it is necessary to analyze how users' concerns about these platforms changed before and after the outbreak. Accordingly, focusing on user experience with online education platforms during COVID-19, this paper first collects user comments from seven representative and widely used mainstream online education platforms, scores the sentiment of the comments, and constructs an evaluation index system for online education platforms. When setting the index weights, both the coefficient of variation method and the entropy weight method are applied; the results of the two methods are compared and the former is finally selected. Based on the obtained index weights, the user experience of each platform before and after the outbreak of COVID-19 is evaluated, so as to analyze the impact of the pandemic on user experience.
On the one hand, this paper aims to assess the emergency response level of online education in the face of public health emergencies, as well as to explore the development level of online education and the technical support capacity of these platforms. On the other hand, in view of the current disputes over online education, this paper traces the source of these disputes from the perspective of user experience and identifies the defects of online education platforms, so as to help online education achieve longer-term development. We therefore hope that this research can help readers understand the impact of the pandemic on education and give subsequent research a clearer picture of the current level and shortcomings of online education. In addition, the mining of user reviews and the methods for processing user experience data are expected to provide insight for other scholars and related companies, and to help industries such as education and tourism get closer to users' real feelings and feedback in user experience research.
The structure of the paper is organized as follows. Section 2 reviews the literature to provide a basis for the subsequent establishment of the platform evaluation system. Section 3 selects seven representative online education platforms and collects their user comments. Section 4 computes sentiment scores for each platform based on the collected comments, mines hot issues from the comment data, and ranks their popularity. Section 5 constructs and quantifies the evaluation index system of online education platforms and then evaluates user experience. Section 6 concludes the paper.

2. Literature Review

This paper focuses on the impact of the pandemic on the user experience of online education platforms. Because the outbreak of COVID-19 is recent, research in this specific area is very limited; however, we can still draw on existing international research on online education. Since the method of this paper is to establish an evaluation index system for online education platforms, we first review existing evaluation systems, whose indexes are mainly derived from traditional teaching evaluation. Second, because the purpose of this paper is to study user experience, we then review the literature on user experience and build our indicators on the influencing factors identified in previous studies. This section analyzes the literature from these two aspects.

2.1. The Study of Education Evaluation Systems

Owing to the severity and urgency of the COVID-19 outbreak in late 2019, school suspension became one of the most common means of pandemic prevention and control [8]. However, suspending classes disrupts course progress, while resuming classes may cause a second wave of infection in schools [9]. Especially during a severe pandemic, the best way to resolve this contradiction is to move classroom teaching from in-person to online, which effectively prevents crowds from gathering. Therefore, the online education mode under COVID-19 has become an important way to prevent and control the pandemic while ensuring teaching progress.
Unlike online platforms in other industries, the evaluation of online teaching platforms is not only reflected in technology but also needs to be measured in terms of the quality of online courses, teaching levels, and so on. At present, traditional educational evaluation indicators are mainly based on expert scores and industry standards. Representative studies are as follows. Tochot et al. [10] constructed a measurement model consisting of conceptual use, symbolic use, legitimate use, and instrumental use through external evaluations by the Office for National Education Standards and Quality Assessment (Public Organization) (ONESQA). Daytion and Vaughn [11] designed a network course quality assurance system with 7 first-level indicators and 25 second-level indicators based on educational practice goals. From the perspective of the quality of online courses, Lin [12] established an evaluation system consisting of 4 first-level indicators, namely system quality, information quality, service quality, and attractiveness, and 16 second-level indicators. These evaluation indexes are mainly aimed at the traditional offline teaching mode and do not apply to the online mode.
In recent years, with the popularization of online education, more and more online education platforms have emerged, and higher requirements have been put forward for the teaching/learning outcomes and technical standards that platforms need to achieve. Therefore, more consideration should be given to the technical and interactive features of online education when evaluating these platforms and courses. For example, Kimberley et al. [13] combined the characteristics of MOOC platforms and compared several assessment methods to identify their respective advantages and limitations. Wong and Billy [14] compared four MOOC platforms, namely Coursera, edX, FutureLearn, and OpenLearning, and showed that courses differ in duration, learning activities, assessment, social interaction, and instructors' participation. Based on evaluations of the theoretical foundation, evaluation principles, evaluation index system, evaluation system and platform design, and implementation of the Moodle network learning system, Xia [15] constructed a relatively complete evaluation system that is both scientific and practical. Combining course quality evaluation indicators, Jin [16] established an open quality evaluation index system for online education through capability maturity models, reliability and effectiveness testing, and expert consultation and verification. Additionally, in the field of mobile news training, Cervi et al. [17] used data collected through a post-evaluation survey to show that MOOCs are effective training tools by evaluating their structure, function, and participants' opinions. Moreover, factors such as policy also influence educational outcomes; for example, in the field of media education, Caprino et al. [18] described the connection between media education policies and media literacy levels in 27 member states of the European Union.
Most of the above research on evaluation systems for online education platforms takes the perspective of the curriculum and the platform, with little attention to user requirements and experience. For example, Wong and Billy [14] scored platforms using a traditional quality evaluation system whose indicators and scores were adapted or assigned by the authors on the basis of past studies. In contrast, this paper presents an evaluation system in which the user experience indexes are derived not only from previous indicator systems but also from user comments. More importantly, under the influence of public health emergencies, course content, online teaching methods, and even students' learning mentality differ from the past, so users' experience and feedback behaviors should also be considered when constructing the indicator system.

2.2. The Study of User Experience

User experience (UX) [19] refers to users' feelings before, during, and after using a product or system, including their emotions, beliefs, and preferences. Zahidi et al. [20] pointed out that the factors that affect user experience are the driving factors that trigger user satisfaction and dissatisfaction. Moreover, user satisfaction depends on user needs, expectations, and prior user experience.
Current research on user experience and platform satisfaction obtains basic data through questionnaire surveys or online reviews and then performs statistical analysis or natural language processing. UX is widely used in the evaluation and optimization of commercial platforms, but rarely in online education. For example, Pappas [21] studied the factors influencing purchase intention based on real-time experience feedback from 185 users with online shopping experience. Lohse et al. [22] used a user experience questionnaire to evaluate the music, cutting, and flickering of learning videos. By analyzing the vocabulary of customers' online shopping comments, Li et al. [23] obtained 11 indicators that constitute a satisfaction index system for e-commerce platforms. Yang [24] calculated evaluation scores for certain online courses by establishing a lexicon of positive/negative emotional tendency and combining the proportion of comments in each category with the gap between the numbers of positive and negative comments. Compared with questionnaire data, analysis based on online user comments is more realistic and objective: the respondents of a questionnaire are targeted and not globally representative, and the questions are set subjectively, whereas online comments come from users of various ages and backgrounds and require no predefined questions, so the data obtained are more objective. Meanwhile, an evaluation index model built on this basis can help a platform receive continuous feedback and dynamically understand its advantages and disadvantages.
Current research on online education platforms mainly focuses on satisfaction evaluation results, willingness for continued use, and factors influencing learning. Kamali et al. [25] concluded that the help of electronic devices and resources for learning and education is limited, and that priority should be given to providing a network environment to which students can adapt. Tawafak et al. [26] found that continuance intention depends on the type of technology. Roca et al. [27] verified that users' continuance intention is determined by satisfaction, which in turn is jointly determined by perceived usefulness, information quality, confirmation, service quality, system quality, ease of use, and cognitive absorption. Kravvaris et al. [28] and Mackness et al. [29] found through empirical studies of MOOCs that learners' autonomy plays an important role in learning. Ayşenur et al. [30] considered factors such as perceived usefulness, flexibility, reliability, active participation, instructor response time, and consultation. Asarbakhsh and Sardars [31] concluded that system freezes and failed video connections affect user satisfaction, based on an analysis of learning demand, technological design, and intervention content. Roth et al. [32] verified that students receiving a course through video conference had lower final grades and were less satisfied with the course and the instructor. Perceval and Tejedor [33] portrayed an overview of the five degrees of communication in education: oral-gestural, written, audio, audiovisual, and digital, highlighting the changes introduced by the online scenario in the educational process and reflecting on the roles of the student, the teacher, and the relationship between them. Chen et al. [34] used a questionnaire survey and a web crawler to collect comment data from online and offline users, constructed a customer satisfaction index system through sentiment analysis and the existing literature for quantitative analysis, and then forecasted user satisfaction. Most of these studies focus on platform satisfaction or course design without evaluating the teaching content in combination with platform technology; nevertheless, they remain valuable references for establishing the online education platform evaluation system in this paper.
To sum up, research on evaluation systems for online education platforms still has problems such as a lack of specific indicators, unclear evaluation objects, and unreasonable weight distribution. The indicators in existing evaluation systems mainly cover two aspects, the online curriculum and platform function; that is, fixed evaluation indicators are set for course quality, teaching effect, teaching technology, system quality, and so on, and the evaluation is realized through qualitative or quantitative methods. However, few studies evaluate online education platforms from the perspective of user experience, and they do not consider COVID-19 and the possible impact of the outbreak. Based on this, we aim to evaluate user experience before and after the outbreak of COVID-19 from the perspective of user reviews. Although user experience is mostly studied for business platforms, many research results on the factors affecting online education satisfaction can serve as evaluation indicators of user experience, so this paper draws on these articles to establish our indicators. Additionally, analyses of user satisfaction and experience increasingly focus on users' comments and feedback to score and evaluate platforms and courses; by adopting these methods of capturing and processing comments, the index system of online education platforms can be built more accurately and completely.
On this basis, and in light of the effects of COVID-19, this paper refines the indicators used in previous research, constructs an evaluation system for online education platforms, and identifies users' focal concerns in order to analyze the impact of the COVID-19 pandemic on the user experience of online education.

3. Data Acquisition

Before analyzing the data, comment data should be captured. This section first selects the mainstream online education platforms during COVID-19, and then grabs the user comment data of the selected platforms before and after the outbreak of the pandemic.

3.1. The Choice of Online Education Platform

At present, there are a large number of online education platforms in China, including MOOC, Tencent Classroom, Xuetang Online, Yu Classroom, etc. However, the quality of these platforms varies greatly, so it is necessary to select representative platforms from among them. Qimai Data is a professional mobile promotion data analysis platform launched by Beijing Qimai Technology Co., Ltd. (Beijing, China), which supports data queries for the iOS and Android application markets, WeChat, mini programs, etc. This paper selects data samples of online education platforms from the Qimai platform.
The download volume of online education platforms, their comment rankings, and their popularity are taken as measurement criteria. During COVID-19, more business software shifted to developing online education functions and came into wide use. This paper screened the rankings of the "business" and "education" subcategories in the Qimai data (updated on 17 March 2020); the results are shown in Table 1 below. Application refers to the platform selected in this study. Classification ranking refers to the usage ranking of the selected platform in the "business" or "education" subcategory on the Qimai website. Application list refers to the overall usage ranking of the selected platform on the Qimai website. "Keyword coverage" indicates how easily users can find an app (application program, i.e., software installed on a smartphone that extends the original system and is personalized to meet different user needs) when searching with a wider range of keywords.
As can be seen from Table 1, in the "business" category, DingTalk, Tencent Meeting, TIM, and WeChat Work rank in the top four. In addition, Zoom Cloud, which ranks 8th, is selected because the business software ranked 5th to 7th had not been transformed into online education platforms. In the "education" category, although Chaoxing Learning and MOOC did not rank at the very top, the education ranking also contains non-online learning platforms, so Chaoxing Learning and MOOC rank relatively high among genuine network teaching platforms. Therefore, Chaoxing Learning and MOOC are selected as representatives of the online teaching and education industry, while DingTalk, Tencent Meeting, Zoom Cloud, TIM, and WeChat Work are selected as representatives of business software platforms transformed into online education platforms during the pandemic.

3.2. Acquisition of Comment Data on Online Education Platforms

When collecting user comments from DingTalk, Tencent Meeting, Zoom Cloud, TIM, WeChat Work, Chaoxing Learning, and MOOC, both the time windows of comment collection and the types of comments should be considered. Since this paper mainly compares the user experience of online education platforms before and after the outbreak of the pandemic, comments are collected for two periods: 16 November 2019 to 16 December 2019 and 17 February 2020 to 17 March 2020. In addition, there are four types of user comments on the Qimai website: all comments, developer reply comments, deleted comments, and undeleted comments. Deleted comments cannot be collected, and developers' replies have nothing to do with user experience. Note also that the number of comments on a platform is not proportional to its number of users and therefore does not represent platform size. To keep the comments as current as possible, undeleted comments are selected as the comment data.
After analyzing platform usage and comment popularity, the collected data were lightly processed by hand: invalid content such as emoticons, repeated words, and mood (filler) words was removed. The numbers of comments on the seven platforms before and after the outbreak of COVID-19 are shown in Table 2 below.
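To illustrate the kind of light cleaning described above, the following Python sketch removes emoji, collapses repeated characters, and strips common filler words. The paper performed this step manually; the regular expressions and the filler-word list here are illustrative assumptions, not the authors' actual procedure.

import re

FILLERS = {"啊", "呀", "呢", "吧", "哈哈"}  # hypothetical mood/filler words

def clean_comment(text: str) -> str:
    # Drop emoji and related pictographic symbols.
    text = re.sub(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", "", text)
    # Collapse any character repeated three or more times into a single one.
    text = re.sub(r"(.)\1{2,}", r"\1", text)
    # Remove filler/mood words.
    for w in FILLERS:
        text = text.replace(w, "")
    return text.strip()

print(clean_comment("太卡了了了啊啊😂😂"))  # -> "太卡了"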

4. Analysis on the Characteristics of User Comments on Online Education Platforms

Since users' comments are relatively redundant and scattered, ROST CM 5.8.0 is used to classify the emotional tendency of the comments, and NLPIR-Parser is used to score that tendency, so as to identify the online education platforms with good user experience and users' overall sentiment. Based on text similarity statistics and semantic network visualization, the hot issues that concern users are then mined as the basis for constructing the subsequent user experience indexes.

4.1. Emotion Analysis

Sentiment analysis of text, also called opinion mining or tendency analysis, deals with the subjective, emotionally colored comments that large numbers of internet users post about people, events, products, and other objects. It analyzes, processes, summarizes, and reasons over such subjective text. With the help of semantic rules, each user comment is assigned a numerical value that reflects the strength and direction of its sentiment: positive values indicate positive sentiment, negative values indicate negative sentiment, and zero indicates a neutral attitude; the higher the value, the more positive the attitude, and the lower the value, the more negative the attitude. In this way, each user comment receives a sentiment score and an associated emotional tendency.
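As an illustration of the dictionary-plus-rules scoring idea described above, the following Python sketch assigns each comment a signed score by summing lexicon weights over its segmented words. The lexicons, weights, and negation handling are simplified assumptions made for this example; they are not the dictionaries used by ROST CM or NLPIR.

import jieba

POSITIVE = {"好用": 1.0, "流畅": 1.0, "方便": 0.8}    # hypothetical positive words and weights
NEGATIVE = {"卡顿": -1.0, "崩溃": -1.5, "垃圾": -1.2}  # hypothetical negative words and weights
NEGATORS = {"不", "没", "没有"}                        # flip the polarity of the next sentiment word

def sentiment_score(comment: str) -> float:
    """Sum lexicon weights over segmented words; >0 positive, <0 negative, 0 neutral."""
    score, flip = 0.0, 1.0
    for w in jieba.cut(comment):
        if w in NEGATORS:
            flip = -1.0
            continue
        if w in POSITIVE or w in NEGATIVE:
            score += flip * (POSITIVE.get(w, 0.0) + NEGATIVE.get(w, 0.0))
        flip = 1.0
    return score

print(sentiment_score("上课很流畅，不卡顿"))  # expected: a positive score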
By rating the emotional tendency and sentiment scores of users' comments, this paper identifies users' preferences for the online education platforms before and after the occurrence of COVID-19 and treats them as the initial grading of each platform. Because of limitations in the emotion dictionary of ROST CM 5.8.0, NLPIR-Parser is also used to score the comments, so as to identify the online education platforms with good user experience.

4.1.1. ROST CM Analysis

ROST CM [35] is a large, free social computing platform for content mining in the humanities and social sciences, developed by Professor Shen at Wuhan University. At present, it is the only platform in China built specifically to support humanities and social science research, and it is widely used in fields such as online public opinion [36,37], personalized recommendation [38], and emergency rescue decision making [39,40]. The software supports Weibo analysis, chat analysis, whole-network analysis, site analysis, word segmentation, word frequency statistics, English word frequency statistics, traffic analysis, clustering analysis, and a series of other text analyses. Text mining with this software requires no programming background, as long as the operation steps are followed. ROST CM is particularly competitive in semantic network and sentiment analysis. In this paper, ROST CM 5.8 is used for sentiment analysis, and the results are aggregated into the proportions of positive, neutral, and negative comments for the seven platforms before and after the outbreak of COVID-19, as shown in Table 3 and Table 4 below.
(1) The above analysis shows that during COVID-19, DingTalk received a better user experience response, with predominantly positive reviews. This suggests that, with ample technical and software support, DingTalk coped easily with the dramatic increase in users: as business software, it met the requirements of teachers and students while responding to the various demands of online education. The following sections further explore the advantages of this platform and use it as a reference for others. Compared with DingTalk, Chaoxing Learning needs to examine why its users report a poor experience. Generally speaking, for online education software, users will not continue to use a platform that offers no technological innovation or functional improvement, so substantial technical and software support is constructive for the later improvement of an online platform.
(2) Compared with before the occurrence of COVID-19, users' positive comments on Tencent Meeting and MOOC decreased somewhat. The proportion of positive comments on Tencent Meeting decreased by 5.06%, while that on MOOC decreased by as much as 30.04%. As self-study software providing learning resources, MOOC mainly offers electronic books, videos, and similar materials and gives priority to self-study. During COVID-19, in order to improve the quality of students' learning outcomes, most colleges and universities carried out teaching through strongly interactive methods such as video teaching and audio conferencing, aiming to match the quality of in-person teaching. However, in the face of COVID-19, MOOC did not develop new functions quickly enough to remedy its disadvantages, resulting in a sharp decline in user experience. This provides a criterion for the later evaluation of online education platforms: if a platform can provide video teaching, audio conferencing, and highly interactive teaching, users will rate it highly.
(3) The positive and negative comments on WeChat Work, TIM, and Zoom Cloud did not change significantly before and after the outbreak of COVID-19. During the pandemic, Zoom Cloud's positive comments decreased by 1.92%, WeChat Work's increased by 8.06%, and TIM's decreased by 4.33%. These platforms did not improve greatly on the whole, but compared with Chaoxing Learning and MOOC, their user experience was relatively better. Users' positive and negative emotional evaluations of a platform inform its later evaluation: generally speaking, if users' negative emotional evaluation of a platform is too heavy, their overall evaluation of the platform will be lower. For the companies behind these platforms, software upgrades are therefore of vital importance.

4.1.2. NLPIR Emotional Score

The NLPIR-Parser [41] big data semantic analysis platform, which has been under development for more than 20 years, integrates core technologies such as network data collection, natural language processing, text mining, and text retrieval. Its sentiment analysis focuses on the automatic recognition and weighting of emotion words, and it adopts co-occurrence relationships and a bootstrapping strategy to generate new emotion words and their weights through repeated iteration. In bibliometrics, keyword co-occurrence is commonly used to determine the relationships between themes in the discipline represented by a document collection; for example, co-occurrence relationships can be used to analyze the relationships between characters in a novel or play. In statistics, bootstrapping is uniform sampling with replacement from a given training set; that is, every time a sample is selected, it is equally likely to be selected again and added to the training set once more.
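The following toy sketch illustrates the two ideas mentioned above, word co-occurrence counting and bootstrap sampling with replacement. It is not NLPIR's internal implementation, and the example comments are made up.

import random
from collections import Counter
from itertools import combinations

comments = [["软件", "卡顿", "崩溃"], ["软件", "流畅"], ["卡顿", "崩溃"]]  # toy segmented comments

# Co-occurrence: count how often two words appear in the same comment.
cooccur = Counter()
for words in comments:
    for a, b in combinations(sorted(set(words)), 2):
        cooccur[(a, b)] += 1
print(cooccur.most_common(3))

# Bootstrapping: draw a sample of the same size with replacement,
# so any comment may be selected more than once.
random.seed(0)
bootstrap_sample = [random.choice(comments) for _ in range(len(comments))]
print(bootstrap_sample)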
The sentiment scores of the comments before and after the outbreak of the pandemic (including total, positive, and negative scores), obtained by analyzing the comment data of the seven online education platforms, are shown in Table 5 and Table 6.
According to the above sentiment scores, before the pandemic the feedback on all seven platforms was generally negative, indicating that most users had a poor experience when using online education platforms. During COVID-19, the total sentiment scores of DingTalk and Zoom Cloud were positive, with Zoom Cloud's the highest, indicating that Zoom Cloud provided the best overall user experience, followed by DingTalk. These two platforms showed more advantages after the outbreak, which suggests that the technical response capacity of DingTalk and Zoom Cloud is worth referencing. In addition, after the occurrence of COVID-19, the negative score of Chaoxing Learning was much higher than its positive score; its disadvantages were more obvious, leading to a poorer overall user experience.

4.1.3. Visual Analysis Based on Semantic Network

Based on ROST CM's analysis of the proportions of positive and negative comments and NLPIR's sentiment scoring, DingTalk receives consistent praise and provides the best user experience, while Chaoxing Learning and MOOC provide a poorer user experience. In order to obtain users' main concerns about online education platforms, semantic network visualization analysis is applied to DingTalk, Chaoxing Learning, and MOOC for the periods before and after the outbreak of COVID-19.
A semantic network is one representation used in artificial intelligence programs; it expresses human knowledge as a network consisting of nodes connected by arcs, where nodes represent concepts (events or things) and arcs represent the relationships between them. A semantic network involves four related parts. The lexical part determines which symbols (nodes and arcs) are allowed in the vocabulary; here, keywords are extracted from the user comments. The structural part describes the constraints on how symbols are arranged and specifies the node pairs connected by each arc; here, links are established through the relationships between keywords in users' comments. The procedural part explains the access procedures used to create and modify descriptions and to answer related questions; here, the direction between related words indicates cause and effect. The semantic part determines the meanings associated with descriptions, that is, the arrangement of the relevant nodes and their attached and paired arcs.
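ROST CM builds the semantic network internally, but the general idea can be approximated as follows: segmented comments yield word co-occurrence counts, which become weighted edges between word nodes. The word lists, the edge threshold, and the use of the networkx library are assumptions made only for illustration.

import networkx as nx
from collections import Counter
from itertools import combinations

segmented_comments = [
    ["软件", "直播", "卡顿"],
    ["软件", "直播", "五星"],
    ["软件", "卡顿", "崩溃"],
]  # toy data standing in for segmented user comments

# Count how often each pair of words appears in the same comment.
edge_counts = Counter()
for words in segmented_comments:
    for a, b in combinations(sorted(set(words)), 2):
        edge_counts[(a, b)] += 1

# Keep only edges seen at least twice and build the network.
G = nx.Graph()
for (a, b), w in edge_counts.items():
    if w >= 2:
        G.add_edge(a, b, weight=w)

# Degree centrality highlights "hub" nodes such as "软件" in the figures.
print(nx.degree_centrality(G))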
ROST CM 5.8.0 was used to obtain the semantic network relationships shown in Figure 1 and Figure 2. The purpose is to present the semantic network diagrams of user comments, capture the similarity among users' comments on each platform, and thereby dig out the points users care about most.
As shown in Figure 1 and Figure 2, "software" is the central node and "live" is the closest node, reflecting that DingTalk's main teaching method is live broadcasting, which increases interaction, enlivens the learning atmosphere, and improves teaching quality. In addition, before and after the outbreak of COVID-19, nodes such as "five star" and "payment by installments" appear: when DingTalk was congested for a time, its public relations team responded promptly, which gave rise to the topic of "paying five stars by installments". This timely response led users to give DingTalk five stars and high praise.
At the same time, before the pandemic, "office" was closer to the central node, and the related nodes covered both office and education: nodes such as "mobile phone", "holiday", "powerful", and "Internet" mainly evaluated DingTalk's business functions, because before the outbreak DingTalk was primarily software for clocking in and office work. After the outbreak, however, most of DingTalk's nodes focused on educational evaluation, such as "live broadcast", "school", "children", and "homework", while evaluations of the business experience became relatively rare. This indicates that DingTalk adjusted its software functions in time to meet the needs of online education users before and after the outbreak of COVID-19.
In addition, the semantic relations of Chaoxing Learning and MOOC are shown in Figure 3, Figure 4, Figure 5 and Figure 6. For Chaoxing Learning, "software", "rubbish", "learning", and "class" are the key nodes, and "submit", "server", "log in", "collapse", and "waiting" are the closest nodes, indicating problems such as server crashes and the inability to log in or submit learning time due to system failures. For MOOC, with "learning", "rubbish", "curriculum", and "software" as important nodes, the nearby nodes include "failure", "connection", "duration", "server", and "progress", showing that connection failures, unsubmitted learning duration, and server crashes occur frequently on the MOOC platform. There are also some independent nodes in the figures, including "account" and "homework", suggesting that the platform may fail when registering an account, submitting homework, or refreshing the website, while nodes such as "delay" and "serious" indicate serious delays on the MOOC platform.

4.2. Acquisition of Hot Issues in User Comments

In order to dig out the hot issues that users pay attention to from the large volume of data and thereby support the subsequent user experience evaluation system, a similarity-based heat statistics method is used: similar information is extracted from the descriptive text, and the issues are ranked according to how frequently such similar information occurs.

4.2.1. Introduction to Algorithm Principle

In this section, Python is used for the text similarity heat statistics. The input is an Excel file containing only the platform comment text. The specific implementation steps are as follows (a consolidated code sketch follows the list):
(1) Data preprocessing
Pandas, the Python data analysis package, is used to read the input, and the comment sentences are segmented into words with the Chinese word segmentation library Jieba to form a two-dimensional array.
(2) Dealing with dictionaries
Gensim is a Python library for automatically extracting semantic topics from documents that can be used to process unstructured text. The corpora.Dictionary method in the library is invoked to generate a dictionary from the two-dimensional array of words, thereby creating a dictionary based on the input comment text in which each word is identified by a unique numeric ID.
(3) Corpus processing
The Bag-of-Words model packs all words into one bag without considering their morphology or word order; that is, each word is treated as independent. For example, given the dictionary [Jane, wants, to, go, Shenzhen, Bob, Shanghai], the sentence "Jane wants to go to Shenzhen" can be expressed as (1, 1, 2, 1, 1, 0, 0), i.e., the number of occurrences of each dictionary word in the sentence. Therefore, the two-dimensional array is converted into sparse vectors through the Doc2Bow method in Python, building the Bag-of-Words model and forming a corpus.
(4) Calculating text similarity
The LSI (Latent Semantic Indexing) model uses SVD (Singular Value Decomposition) to decompose the word-document matrix. SVD can be seen as discovering unrelated index variables from the word-document matrix and mapping the original data into the semantic space. Documents that are not similar in the word-document matrix may be similar in the semantic space. The text topic matrix obtained by LSI can be used for text similarity calculation.
TF-IDF is a statistical method used to assess the importance of a word to one of the documents in a document set or corpus. The importance of a word increases proportionally with the frequency of its occurrence in the document, but decreases inversely with the frequency of its occurrence in the corpus.
In practice, the TF-IDF model is first computed over the corpus, and a Latent Semantic Indexing (LSI) model is trained on the TF-IDF-transformed corpus; the number of features is obtained from the dictionary; finally, the LSI-transformed corpus and the feature count are passed to the sparse matrix similarity method, which establishes a sparse similarity matrix and an index.
(5) Calculating the similarity between test data and sample data
Each user comment is segmented with Jieba, the sparse vector of the test data is calculated with doc2bow, and the similarity between the test data and the sample data is then computed; comments with a similarity greater than 0.6 are classified into the same problem category.
(6) Statistics on hot issues
The number of comments in each problem category is counted and regarded as the heat of that problem, and the problems are ranked according to their heat.
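Putting the six steps together, a minimal sketch of the pipeline might look as follows. It assumes the comments are stored in an Excel file with a single "comment" column (a hypothetical file and column name); the models and the 0.6 threshold follow the description above, while the number of LSI topics is an arbitrary choice not specified in the paper.

import jieba
import pandas as pd
from gensim import corpora, models, similarities

# (1) Data preprocessing: read comments and segment them with Jieba.
df = pd.read_excel("comments.xlsx")                        # hypothetical file name
texts = [list(jieba.cut(str(c))) for c in df["comment"]]   # hypothetical column name

# (2) Dictionary and (3) Bag-of-Words corpus.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# (4) TF-IDF, LSI, and the sparse matrix similarity index.
tfidf = models.TfidfModel(corpus)
lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=50)  # topic count assumed
index = similarities.SparseMatrixSimilarity(
    lsi[tfidf[corpus]], num_features=len(dictionary))

# (5) Group comments whose similarity exceeds 0.6 into the same "issue".
assigned, groups = [None] * len(texts), []
for i, text in enumerate(texts):
    if assigned[i] is not None:
        continue
    sims = index[lsi[tfidf[dictionary.doc2bow(text)]]]
    members = [j for j, s in enumerate(sims) if s > 0.6 and assigned[j] is None]
    for j in members:
        assigned[j] = len(groups)
    groups.append(members)

# (6) Rank issues by the number of comments they contain ("heat").
groups.sort(key=len, reverse=True)
for g in groups[:5]:
    print(len(g), "comments, e.g.:", df["comment"].iloc[g[0]])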

4.2.2. Results Analysis

Data from seven online education platforms before and after the occurrence of COVID-19 were statistically analyzed for text similarity and heat. The results are shown in Figure 7 and Figure 8.
The comparison between Figure 7 and Figure 8 shows that the hot issues users were concerned with on the same platform changed greatly before and after the outbreak of COVID-19. For example, before the outbreak, users' biggest complaint about the MOOC platform was the lack of a landscape (horizontal screen) function, whereas after the outbreak their biggest complaint was crashing (forced exits). This shows that users have different concerns and requirements for a platform in different periods. In addition, comparing the hot issues across platforms, it is not difficult to find certain similarities, such as freezing and crashing, which are hot issues on almost every platform. Therefore, the problems of all platforms can be further integrated to establish a unified evaluation index system.

5. Evaluation of User Experience of Online Education Platforms before and after the Outbreak of the Pandemic

In this section, by analyzing the characteristics of comments on the online education platforms, the comments of each platform are grouped by similarity to obtain the main user experience problems of the platforms. Based on the word frequencies of user comments, the frequency of words related to each type of problem is counted. Then, drawing on the existing literature and the hot issues in user comments, the evaluation index system of user experience for online education platforms is constructed and quantified. Finally, the changes in user experience before and after the outbreak of the pandemic are identified and analyzed.

5.1. Setting of Index System

Based on the network learning evaluation system for the Moodle platform proposed by Xia [15] and the evaluation index system for open online course teaching quality by Jin [16], user evaluation is selected as the basis of analysis. Adopting the data selection and evaluation methods of Li et al. [23] and Yang [24], we analyze the comment results and integrate the problems of each platform to construct a user experience evaluation index system for online education platforms, which contains 5 primary indexes and 15 secondary indexes, as shown in Table 7.
In order to quantify the above primary and secondary indicators, the subsequent index weights are based on indicator libraries built from user comments that contain synonyms, antonyms, and related words. Each index library is traversed, and the frequency of its words is used as the basis of evaluation. The following subsections explain in detail the indicator library corresponding to each index.

5.1.1. The Characteristics of a Platform

Most of the collected user reviews focus on system compatibility, stability, response speed, and so on. The system characteristic is a basic index for evaluating the quality of an online education platform, and the reviews suggest that the network and hardware environment of these platforms still has room for improvement in technical support and learner support services. Accordingly, two secondary indexes, stability and compatibility, are set under the characteristics of the platform system. System stability means that the website can be accessed successfully at any time without errors. System compatibility means that the online education platform can be used by different clients and received by different terminals. Examples of the word libraries for system stability and compatibility are shown in Table 8 and Table 9.

5.1.2. Support Service

The construction of a systematic evaluation index for an online education platform should consider the teaching environment, which directly affects the user experience: whether the platform can provide students with valuable learning information or guidance to solve the problems they encounter in learning, and whether it can guide students to make reasonable use of the platform and the related learning tools. Based on this, a secondary index of learning and technical support is set under the support service dimension, meaning that customer service personnel provide technical support and learning guidance materials for learners. Examples of the relevant word library are shown in Table 10.

5.1.3. Platform Video Quality

The problem of "video" was repeatedly mentioned in user comments after the outbreak of COVID-19. For example, when using DingTalk's video conferencing, users reported problems such as "unsmooth picture", "low picture quality", "unable to adjust the volume", and "delayed video uploading and downloading". The continuous development of media technology has thus raised the requirements for the quality of course video on online education platforms. In terms of video information transmission, if students cannot receive information in time, their learning enthusiasm is greatly reduced. Based on this, three secondary indexes are set to assess video quality: the quality of the video picture, the quality of the video sound, and the timeliness of video information transmission. Examples of the relevant word libraries are shown in Table 11, Table 12 and Table 13.

5.1.4. Requirement of Platform Technology

The network technology of a platform refers to the way information is communicated between learners and the online education platform. Effective and reliable Internet technology is an evaluation standard for the strengths and weaknesses of an online education platform, and a simple, user-friendly interface design is an important factor determining the quality of online education and the user's sense of experience. Therefore, the second-level indexes under platform technology requirements include interface design, access speed, navigation links, security, and reliability; examples of the relevant word libraries are shown in Table 14, Table 15, Table 16, Table 17 and Table 18.

5.1.5. Platform Teaching Support System

Jun and Yong [42] described the general structure and main functionalities of an open-source web-based teaching management system and incorporated student data, course management, teaching data collection, and configuration management into the contents of an online teaching management system. Drawing on this work and the issues raised in user comments, this paper sets four second-level indicators: communication and interaction, teaching function, course management, and student status management. Examples of the relevant word libraries are shown in Table 19, Table 20, Table 21 and Table 22.

5.2. Indicator Scoring Method

In the previous section, a word library was established for each second-level index. In this section, the occurrence frequency of those words is taken as the initial score of the corresponding index.
Generally speaking, a second-level index consists of two word libraries: positive and negative comments [43]. Such an index is scored as "(frequency of positive words - frequency of negative words) / total number of comments on the platform". However, when an evaluation word is highly generic, a related-word library is added, and the indicator library takes the form of "related words + adjectives". For example, the word describing poor system compatibility is "problematic", which is very generic and can also describe other indicators; therefore, the related word "version adaptation" is added, and only when both "version adaptation" and "problematic" appear in a comment is the comment judged to describe poor system compatibility. Accordingly, when a second-level index contains three vocabularies (positive comments, negative comments, and related words), it is scored as "(frequency of positive comments containing a related word - frequency of negative comments containing a related word) / total number of comments on the platform".
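A minimal sketch of this scoring rule is given below; the word lists and example comments are illustrative placeholders rather than the paper's actual index libraries.

def indicator_score(comments, positive, negative, related=None):
    """(positive hits - negative hits) / total comments; if `related` is given,
    hits only count in comments that also contain one of the related words."""
    pos_hits = neg_hits = 0
    for c in comments:
        if related is not None and not any(r in c for r in related):
            continue
        pos_hits += sum(w in c for w in positive)
        neg_hits += sum(w in c for w in negative)
    return (pos_hits - neg_hits) / len(comments)

comments = ["版本适配有问题", "版本适配很流畅", "界面很好看"]  # toy comments

# Two-library case: positive and negative words only.
print(indicator_score(comments, positive=["流畅"], negative=["闪退"]))

# Three-library case: the generic word "有问题" only counts alongside "版本适配".
print(indicator_score(comments, positive=["流畅"], negative=["有问题"],
                      related=["版本适配"]))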
After obtaining the scores of each indicator on each platform through the above methods, the data is processed and the preliminary scores are obtained.

5.3. Index Weight Setting

After obtaining the preliminary scores, it is necessary to set the weight of each indicator in order to obtain the total score of each platform. Index weighting methods are divided into subjective and objective methods: subjective methods include expert scoring and the Delphi method, while objective methods include the entropy weight method and the coefficient of variation method. Because the experimental data are obtained directly from the web, are large in volume, and have little hierarchy, and because objective weighting does not rely on human subjective judgment and has a strong mathematical basis, objective weighting is adopted here. By comparing the weights produced by the two objective methods, the coefficient of variation method is finally used to weight the experimental results.

5.3.1. Coefficient of Variation Method

The coefficient of variation method [44] is an objective weighting method that calculates the degree of variation of each index directly from the information contained in the index values. In the evaluation index system, the greater the difference in the values of an index across platforms, the greater its weight; the smaller the difference, the smaller its weight.
Suppose that the target vector and each index vector are arranged into a matrix M = (index 1, index 2, ..., index n) = {X1, X2, ..., Xn}. The coefficient of variation of the ith evaluation index is calculated as:
CV_i = \frac{D_i}{\bar{x}_i}
where D_i is the standard deviation of the ith evaluation index and \bar{x}_i is its mean. The coefficients of variation are normalized to obtain the weight of each evaluation index.
The weights of secondary indexes before and after the outbreak of COVID-19 by coefficient of variation method are shown in Table 23 and Table 24, and the numbers marked in red are the three secondary indexes with the highest weights before and after the outbreak of the pandemic.
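For concreteness, a small sketch of the coefficient-of-variation weighting (and the resulting weighted platform scores) is shown below, using a made-up score matrix in place of the real index scores.

import numpy as np

# Toy score matrix after preprocessing (rows = platforms, columns = secondary indexes);
# values are made up for illustration only.
scores = np.array([[0.62, 0.45, 0.80],
                   [0.58, 0.52, 0.75],
                   [0.70, 0.40, 0.60]])

cv = scores.std(axis=0) / scores.mean(axis=0)   # CV_i = sigma_i / xbar_i
weights = cv / cv.sum()                          # normalized CV -> index weights
totals = scores @ weights                        # comprehensive score of each platform
print(np.round(weights, 3), np.round(totals, 3))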

5.3.2. Entropy Weight Method

Entropy was first introduced into information theory by Shannon [45] and has been widely applied in engineering, technology, the social economy, and other fields. Information measures the degree of order in a system, while entropy measures its degree of disorder. For an index, its dispersion can be judged by its entropy value: the smaller the information entropy, the greater the dispersion, the more information the index provides, and the greater its role and weight in the comprehensive evaluation.
Suppose there are k indicators and n groups of data. First, the target vector and each indicator vector are arranged into a matrix M = (indicator 1, indicator 2, ..., indicator k) = {X1, X2, ..., Xk}, where Xi = {x1, x2, ..., xn}. According to the definition of information entropy in information theory, the information entropy of a group of data is calculated as:
E_j = -\frac{1}{\ln(n)} \sum_{i=1}^{n} p_{ij} \ln p_{ij}
where p_{ij} is the proportion of the standardized value of the ith sample in the data of the jth index. The weight of each index can then be determined from its information entropy.
The weights of secondary indexes before and after the outbreak of COVID-19 by entropy weight method are shown in Table 25 and Table 26. The numbers in red are the three secondary indicators with the highest weights before and after the outbreak of COVID-19.
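A corresponding sketch of the entropy weighting is shown below, again on a made-up score matrix. The final normalization w_j = (1 - E_j) / sum(1 - E_k) is the standard entropy-weight rule and is assumed here, since the paper only states that the weights are determined from the information entropy.

import numpy as np

# Same toy score matrix (rows = n platforms, columns = k indexes); values are made up.
scores = np.array([[0.62, 0.45, 0.80],
                   [0.58, 0.52, 0.75],
                   [0.70, 0.40, 0.60]])
n = scores.shape[0]

p = scores / scores.sum(axis=0)                        # proportion of each value in its column
entropy = -(p * np.log(p)).sum(axis=0) / np.log(n)     # E_j = -(1/ln n) * sum p_ij ln p_ij
weights = (1 - entropy) / (1 - entropy).sum()          # smaller entropy -> larger weight
print(np.round(weights, 3))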

5.3.3. Method Comparison

The coefficient of variation method and entropy weight method are both objective. According to the calculation results, users’ attention to the platform changed before and after the outbreak of the pandemic. However, there are differences in index weights between the two approaches.
The coefficient of variation method calculates weights from the coefficient of variation of each index. Table 23 shows the weights of the secondary indicators before COVID-19; the three indicators with the highest weights are access speed, reliability, and timeliness of video information transmission, indicating that users were more concerned about the smooth use of the platform and basic technical problems. Table 24 shows the secondary index weights after the outbreak of COVID-19; the three highest-weighted indicators are course management, communication and interaction, and learning and technical support, indicating that user attention shifted from basic technical problems to learning functions.
The entropy weight method calculates weights from the information entropy of each index. Table 25 shows the weights of the secondary indicators before the outbreak of COVID-19; the three indicators with the highest weights are navigation links, course management, and communication and interaction, among which the weight of navigation links is much higher than that of the other indicators. However, only one comment about navigation links appeared, on the WeChat Work platform; the other platforms did not mention this indicator at all. The entropy weight method amplifies the differences between index data, leading to an extremely high weight for navigation links that does not match common sense: in general, the quality of navigation links is unlikely to be a key factor in user experience. The high weights of course management and communication and interaction, in contrast, indicate that users paid more attention to the learning functions of the platform. Table 26 shows the weights of the secondary indicators after the occurrence of COVID-19; the three highest-weighted indicators are access speed, reliability, and timeliness of video information transmission, indicating that users paid more attention to the technical aspects of the platforms. Under the entropy weight method, therefore, users' attention appears to shift from the learning functions of the platform to its technology.
In fact, since the outbreak of COVID-19 disrupted normal study plans and shifted face-to-face learning online, more and more teachers and students now use online education platforms, so users pay more attention to a platform's functionalities, a good teaching environment, and the effective implementation of learning programs. On this basis, the weights produced by the entropy weight method are unreasonable, while those of the coefficient of variation method are reasonable. Therefore, the coefficient of variation method is finally selected to set the index weights.

5.4. The Analysis of Evaluation Results

5.4.1. Overall Result Analysis

Before the outbreak of COVID-19, market demand was low and the number of users of online education platforms was relatively small; DingTalk and Tencent Meeting had not yet developed online education functions. Therefore, compared with the learning functions of a platform, users focused more on smooth use and technical problems. After the outbreak, market demand for online education and the number of platform users rose rapidly, and mobile office platforms such as DingTalk and Tencent Meeting successively introduced online teaching functions. Backed by strong parent companies, DingTalk and Tencent Meeting had few technical problems, so users paid attention to the online learning functionalities instead. Within the teaching support system, which represents the learning functions of a platform, users pay more attention to communication and interaction and to course management than to teaching functionalities and student status management. Teaching functionalities refer to live broadcasting, video playback, and the like; all the platforms selected here played an online education role during the pandemic and all provide such basic functionalities, so there is little difference in this index and its weight is naturally low. Student status management refers to modules such as online registration and the recording of lecture attendance; if the duration of video playback cannot be accurately calculated, students may receive low scores and other situations damaging the user experience may arise. Among the seven platforms, only MOOC and Chaoxing Learning teach mainly through recorded courses, where learning time must be recorded by calculating video playback time; the other five platforms mostly teach through live streaming, so the demand for learning time recording, and hence the weight of this index, is relatively low.
Interaction is effective in improving teaching quality. Group discussion, attendance checks by asking questions, and raising hands to speak promote students' learning enthusiasm, help instructors gauge how well students are following, and allow students' questions to be resolved. Because of the physical distance inherent in online teaching, interaction between teachers and students is difficult to carry out. Therefore, to ensure the quality of online learning, the design of communication and interaction functions is extremely important for the user experience of an online education platform during the pandemic. A platform that provides rich interactive functionalities will be favored by more teachers and students.
Course management ensures the orderly execution of the teaching plan. Permission settings in course management distinguish the identities of online users and clarify the difference between teachers and students. The function of reminding students to check in restores the offline cue of the class bell and improves the learning atmosphere. Therefore, course management is relatively important.
To sum up, after the outbreak of the pandemic there is a huge market need for online education platforms, and course management and communication and interaction are the key factors affecting user experience.

5.4.2. Results Analysis of Each Platform

Based on the evaluation index system of user experience, which consists of 5 primary indexes (platform system characteristics, background customer service support, platform video quality, platform technical requirements, and platform teaching support system) and 15 secondary indexes (such as system stability and system compatibility), user comments were weighted through the coefficient of variation method; the resulting scores and ranks of user experience are shown in Table 27 and Table 28. In these tables, green font marks the data with the highest score and red font the data with the lowest score in each row.
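For clarity, the comprehensive score behind Table 27 and Table 28 can be read as a weighted sum of a platform's normalized secondary-index scores. The sketch below illustrates that aggregation with hypothetical weight and score vectors (15 entries each, matching the number of secondary indexes); the figures are not taken from the tables, and the paper's exact normalization and 0–100 scaling may differ.

```python
import numpy as np

# Hypothetical coefficient-of-variation weights and normalized index scores
# for one platform (illustrative values, not copied from Table 24 or Table 28).
weights = np.array([0.06, 0.05, 0.08, 0.08, 0.06, 0.08, 0.08, 0.06, 0.00,
                    0.07, 0.07, 0.08, 0.06, 0.10, 0.07])
scores  = np.array([0.97, 0.98, 0.67, 0.74, 1.00, 1.00, 0.94, 1.00, 1.00,
                    0.87, 1.00, 0.56, 0.91, 0.67, 1.00])

weights = weights / weights.sum()          # make sure the weights sum to 1
comprehensive = float(weights @ scores)    # weighted sum over the 15 secondary indexes
print(f"Comprehensive user-experience score: {100 * comprehensive:.2f} / 100")
```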
As can be seen from Table 27, before the occurrence of COVID-19 the platforms rank, from the best user experience to the worst, as follows: Zoom Cloud, Tencent Meeting, DingTalk, MOOC, TIM, WeChat Work, and Chaoxing Learning. Table 28 shows that after the occurrence of COVID-19 the ranking, in descending order, becomes: DingTalk, Zoom Cloud, Tencent Meeting, WeChat Work, MOOC, TIM, and Chaoxing Learning.
This change is closely related to the initiatives taken by the platforms during the pandemic. As an online office software, DingTalk quickly identified its new position, providing a complete set of online education solutions for all kinds of schools and developing functions such as online classroom and health tasks in response to the policy of "school suspension without class suspension". Although these teaching functions were initially resisted by students tired of studying at home, DingTalk's performance during the pandemic was more prominent than that of other platforms: it offered higher stability and compatibility, a relatively simple interface design, and complete teaching functions, and it was favored by teachers and students. Similarly, WeChat Work was originally an online office software but also supported online teaching during the pandemic. Its group live broadcast supports online lessons, and the pandemic-prevention collection forms in its micro-documents help schools manage students' health information. The improvement of its teaching functionalities and the reliability of the platform won it many positive comments. In contrast, as educational software, Chaoxing Learning could have played a greater role during the pandemic, but its user experience score remained at the bottom. This is closely related to the platform's imperfect learning functions and low access speed. The other four platforms showed slight changes in their rankings, but the differences were not significant.
The performance of each platform on each indicator will be analyzed in detail below.
(1) Zoom Cloud
Zoom Cloud performed well overall, ranking 1st before the outbreak of COVID-19 and 2nd after it. Before the pandemic, access speed was Zoom Cloud's only disadvantage compared with other platforms, and users had a poor experience on this indicator. After the outbreak, Zoom Cloud greatly improved its access speed, and users' attention shifted to its teaching functionalities. However, Zoom Cloud still needs to improve its communication and interaction, teaching functions, and student status management. Thanks to its relatively stable system, simple interface, and easy operation, Zoom Cloud was highly praised during the pandemic. Nevertheless, it did not develop dedicated teaching functionalities; it remained a mobile office software for video conferencing and content sharing, better suited to team meetings in which everyone expresses and shares views. Classroom interaction, by contrast, is dominated by the teacher's lecturing and supplemented by students' participation, so the communication and interaction offered do not exactly meet the needs of teaching. In addition, Zoom Cloud does not distinguish between teacher and student accounts, so it is deficient in teaching functions and student status management.
(2) Tencent Meeting
Tencent Meeting performed well overall, ranking 2nd before the outbreak of COVID-19 and 3rd after it. Before the pandemic, its access speed, reliability, and timeliness of video information transmission were poor, reflecting problems such as lag and choppy video. After the outbreak, Tencent Meeting improved significantly on these three indicators, but video sound quality and teaching functions remained deficient. Regarding sound quality, many users reported problems such as "there is no sound on the recorded screen" and "the volume cannot be adjusted". Regarding teaching functions, Tencent Meeting resembles Zoom Cloud: as video conferencing software, both can only satisfy the need for video teaching and are unable to count the number of students entering a meeting, record the time of entry, or distinguish between student and teacher roles, resulting in low scores for interaction, teaching functions, and student status management.
(3) DingTalk
DingTalk performed well overall, ranking 3rd before the outbreak of COVID-19 and 1st after it. Before the outbreak of the pandemic, its three lowest-scoring indexes were communication and interaction, reliability, and interface design. After the outbreak, the reliability and interface design scores increased, but the communication and interaction score was still low. Although DingTalk was originally a mobile office software, it was successfully transformed and introduced teaching-related functionalities during the pandemic. Thanks to the Alibaba Group, 140,000 servers were deployed to keep the system running normally during the pandemic. In terms of communication and interaction, students can express their views through the message window while the teacher gives a live lecture; however, the message window is somewhat delayed and the interaction is not smooth enough. Therefore, although DingTalk performs well compared with other platforms, it is still deficient in communication and interaction.
(4) MOOC
MOOC performed moderately overall, ranking 4th before the outbreak of COVID-19 and 5th after it. Before the pandemic, MOOC ranked last in access speed, interaction, and reliability. After the outbreak, interaction and communication improved, but system stability, timeliness of video information transmission, reliability, and student status management were criticized. Although positioned as an online education platform, MOOC does not aim to serve only students enrolled in schools; it aims to provide an equal learning platform for everyone, including students and office workers. As a result, learners on the platform watch recorded video courses from elite schools rather than live-streamed classes, and with recorded video there is little interaction. After the outbreak of COVID-19, a large number of students used MOOC to watch course videos, and watching time became an important factor for schools to measure students' learning. However, because learning records were frequently out of sync and slow to update, student status management was greatly affected and user satisfaction was reduced. These technical problems of the platform existed both before and after the outbreak of the pandemic.
(5) TIM
TIM performed poorly overall, ranking 5th before the outbreak of COVID-19 and 6th after it. Before the pandemic, it ranked last in system stability, platform support for learners, interface design, and security, but it performed well in access speed. After the outbreak, the platform had serious problems in five aspects: system compatibility, platform support for learners, interface design, communication and interaction, and course management. TIM is known as the concise version of Tencent QQ, which can sync QQ friends and has a low entry threshold. However, it is also positioned as teamwork office software and therefore shares the problems of Zoom Cloud and Tencent Meeting. At the same time, TIM performs poorly in technology and system compatibility, lacking a suitable tablet version and updating slowly. As one of Tencent's office products, with no advertising push, a simple interface, and practical functions, TIM gave Tencent an opportunity to capture the mobile office market. However, without a clear profit model, and facing the rise of dedicated mobile office software such as DingTalk, TIM is declining.
(6) WeChat Work
The overall performance of WeChat Work was mediocre, ranking 6th before the outbreak of COVID-19 and 4th after it. Before the pandemic, it ranked last in reliability and navigation links and performed poorly in timeliness of video information transmission. WeChat Work is the only platform criticized for its navigation links, with users pointing out that it contains too many advertising links. After the outbreak, WeChat Work improved its scores on most indicators, and the main problem shifted to security, as its protection of user privacy is insufficient. Like TIM, WeChat Work is essentially an office version of WeChat: besides live meetings, it also handles daily communication, clock-in, approval, and other functions, so as teaching software its teaching functionalities are not complete enough. In addition, most users regard WeChat as a private space, yet the chat records of staff or students on WeChat Work can be viewed by superiors, and the meeting moderator can turn on other participants' microphones at will. Users consider such functions an intrusion on privacy and have therefore raised many doubts about the security of this platform.
(7) Chaoxing Learning
Chaoxing Learning had the lowest overall performance, ranking last both before and after the outbreak of COVID-19. Before the pandemic, it performed poorly on three first-level indicators, namely platform system characteristics, platform video quality, and platform teaching support system, including secondary indicators such as system compatibility and video picture quality. After the outbreak, its teaching functionalities improved, but it still ranked at the bottom on five secondary indicators, including video picture quality, timeliness of video information transmission, and course management, and many limitations remain. Chaoxing Learning is a mobile learning platform providing electronic literature retrieval, course learning, and group discussion functionalities. However, technical problems occur frequently, and the platform was complained about many times because the system crashed during the pandemic. Moreover, when the system crashed, the official reply of Chaoxing Learning was that "efforts are being made to repair it, and we call on more students to study at staggered, off-peak times, so that stability will be significantly improved", which exposes the weakness of the technical support behind the platform. Owing to these technical limitations, it also performs poorly in teaching functionalities such as student status management. Many users complained that they were "unable to sign in" and were marked absent by the system, that "signing in late after the off-peak period is still counted as absenteeism", and that "homework notifications cannot be received".

6. Conclusions

Through this research, we hope to help fill the gap in the limited literature on how online education platforms differ in the case of emergencies, so as to improve the level of education from more perspectives. This paper collected online user experience data for online education platforms before and after the outbreak of COVID-19 and identified the change of user experience focus by analyzing the data from the two periods. On the one hand, the characteristics of online user reviews before and after the outbreak were analyzed, the emotional tendencies of user reviews in the two stages were compared, and text similarity statistics were used to retrieve the hot issues in the user reviews. The study found that during COVID-19 the users of the platforms had different concerns and requirements, and that the platforms shared some similar problems, such as lagging ("kartun") and crashing ("flashback"). On the other hand, according to the existing literature and the characteristics of the comments on the online education platforms, an evaluation index system was constructed, and the coefficient of variation weighting method was used to comprehensively evaluate the 15 secondary indicators. The results show that before the outbreak of the pandemic, users were concerned about the access speed, reliability, and timeliness of video information transmission of the platforms, and the user experience of the Zoom Cloud platform was the best. After the outbreak of the pandemic, users mainly focused on course management, communication and interaction, and the learning and technical support services of the platforms, and the user experience of the DingTalk platform was the best. Overall, Chaoxing Learning had the poorest user experience and DingTalk performed best.
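The hot-issue retrieval summarized above is described only at a high level; one plausible way to implement the text-similarity step is to vectorize the reviews with TF-IDF and group comments whose cosine similarity exceeds a threshold. The sketch below assumes scikit-learn is available and uses invented English snippets; it is a generic illustration, not a reproduction of the authors' actual tool chain.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented review snippets standing in for the collected comments.
reviews = [
    "video keeps lagging during the live class",
    "serious lagging in every live class video",
    "cannot sign in and was marked absent",
    "sign in failed again, marked absent by mistake",
]

# Vectorize the reviews and compute pairwise cosine similarity.
sim = cosine_similarity(TfidfVectorizer().fit_transform(reviews))

# Greedily merge reviews whose similarity exceeds a threshold; each group
# of similar complaints is treated as one candidate hot issue.
threshold, groups, assigned = 0.30, [], set()
for i in range(len(reviews)):
    if i in assigned:
        continue
    group = [i] + [j for j in range(i + 1, len(reviews))
                   if j not in assigned and sim[i, j] >= threshold]
    assigned.update(group)
    groups.append(group)

for g in groups:
    print("hot issue group:", [reviews[j] for j in g])
```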
Based on the above analysis, this paper summarizes the following suggestions, which are expected to improve the user experience of the online education platform during COVID-19:
(1) Improving support service
Providing comprehensive, timely, and convenient support services for learners helps maintain a positive learning attitude. In online courses, several problems arise: video information is not transmitted in time, platform access is slow, and questions raised in class are not answered promptly; students also hope to receive guidance and teachers' feedback in a timely manner after class so as to improve their learning outcomes. Therefore, the delay of feedback should be shortened and a variety of feedback channels should be ensured. In addition, online classes are prone to incidents such as failed voice connections, teachers being unfamiliar with the software interface or unable to operate it skillfully, and difficulty diagnosing problems; hardware problems, such as a microphone through which one can neither hear nor be heard, sometimes cannot be solved at all and may even suspend the class, wasting a great deal of time. Therefore, the platform should simplify its software design and configuration and provide customer service that can solve user problems at any time. Furthermore, the platform needs to improve its own technology to address network problems.
(2) Improving the convenience of interactive communication
When online classroom teaching is conducted on the platform, it is sometimes inconvenient for learners to interact and communicate with teachers. It is suggested that the platform adopt a split-screen design so that users can interact with the platform while attending class, allowing information resources to be shared and discussed in a timely and effective way.
(3) Optimizing ease of use
For problems regarding the stability, security, and compatibility of the platform, homework assignments and submissions that are not visible in mobile learning, and pop-up questions that block video playback, it is suggested that the platform continue to be optimized.
(4) Enriching platform resource
It is suggested that the platform should provide extended learning resources for users to ensure that the resources cover all disciplines. In addition, more course activities can be added to the platform to continuously improve the enthusiasm of learners.

Author Contributions

T.C. described the proposed framework and wrote the whole manuscript; L.P. implemented the simulation experiments; B.J. and C.W. collected data; J.Y. and G.C. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Social Science Foundation of China (grant number 19ZDA122), the Education Science Planning Project "Pandemic Situation and Education" of Zhejiang Province (grant number 2020YQJY245), the Contemporary Business and Trade Research Center and Center for Collaborative Innovation Studies of Modern Business of Zhejiang Gongshang University of China (grant number 14SMXY05YB), as well as the Key Project of Higher Education Research of Zhejiang Gongshang University in 2020 ("The construction path and influence promotion: Mechanism of curriculum thought and politics driven by big data", grant number Xgz005).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y. How will COVID-19 Impact Global Education? Available online: http://www.chinadaily.com.cn/a/202003/17/WS5e7045e9a31012821727fb8b.html (accessed on 27 July 2020).
  2. Xinhua Net. Chinese Tech Association in Finland Receives COVID-19 Protection Supplies from Hubei. Available online: http://www.xinhuanet.com/english/2020-06/18/c_139147139.htm (accessed on 27 July 2020).
  3. Xinhua Net. China to Take Measures to Prevent COVID-19 Infection on Campus. Available online: http://www.xinhuanet.com/english/2020-02/12/c_138777565.htm (accessed on 27 July 2020).
  4. Reuters. Alibaba’s DingTalk Gets Bad Grades from China’s Stuck-at-Home Students. Available online: https://www.shine.cn/biz/tech/2002182244/ (accessed on 27 July 2020).
  5. Kang, Y. An Analysis on SPOC: Post-MOOC Era of Online Education. Tsinghua J. Educ. 2014, 85–93. Available online: https://xueshu.baidu.com/usercenter/paper/show?paperid=7dd7cb7c462c1ff1286e6a85c762f3e0&site=xueshu_se&hitarticle=1 (accessed on 27 July 2020).
  6. Panchenko, L.F. Massive open online course as an alternative way of advanced training for higher educational establishment professors. Educ. Pedag. Sci. 2013, 156, 1–17. [Google Scholar]
  7. Yan, H. Research on Satisfaction, Advantages and Disadvantages of Commercial Online Education Platform. Ph.D. Thesis, Capital University of Economics and Business, Beijing, China, 2018. [Google Scholar]
  8. Yang, G.J. The Discussion of the Best Timing and Manner of Intervention in School Influenza Outbreaks. Ph.D. Thesis, Chongqing Medical University, Chongqing, China, 2016. [Google Scholar]
  9. Luo, T.; Geng, T.; Kong, D.; Yu, B.; Jin, X.; Hu, Q.; Tan, X.; Wei, S. Analysis on the influence of class suspension on infectious disease prevention and control in schools. China Sch. Hyg. 2012, 33, 465–467. [Google Scholar]
  10. Tochot, P.; Junpeng, P.; Makmee, P. Measurement Model of Evaluation Utilization: External Evaluation. Proced. Soc. Behav. Sci. 2012, 69, 1751–1756. [Google Scholar] [CrossRef]
  11. Dayton, D.; Vaughn, M.M. Developing a quality assurance process to guide the design and assessment of online courses. Tech. Commun. 2017, 54, 475–489. [Google Scholar]
  12. Lin, H. An application of fuzzy AHP evaluation course website quality. Comp. Educ. 2010, 54, 877–888. [Google Scholar] [CrossRef]
  13. Kimberley, F.; Abrar, A.; Alison, C.; Terese, S.; Elizabeth, B.; Josip, C.; Azeem, M.; David, B.; Edward, M. Massive Open Online Courses (MOOC) Evaluation Methods: Protocol for a Systematic Review. JMIR Res. Protoc. 2019, 8, e12087. [Google Scholar]
  14. Wong, M.; Billy, T. Pedagogic Orientations of MOOC Platforms: Influence on Course Delivery. Asian Assoc. Open Univ. J. 2015, 10, 49–66. [Google Scholar] [CrossRef] [Green Version]
  15. Xia, Y. The Design and Development of Network Learning Evaluation System Based on the Moodle platform. In Proceedings of the 2013 Conference on Education Technology and Management Science (ICETMS 2013), Nanjing, China, 8–9 June 2013. [Google Scholar]
  16. Jin, X. Research on the evaluation index system of open online course teaching quality: A case study of Nanjing Open University. Jiangsu Sci. Technol. Inf. 2019, 36, 56–58. [Google Scholar]
  17. Cervi, L.; Pérez Tornero, J.M.; Tejedor, S. The Challenge of Teaching Mobile Journalism through MOOCs: A Case Study. Sustainability 2020, 12, 5307. [Google Scholar] [CrossRef]
  18. Caprino, M.P.; Cervi, L.; Martínez-Cerdá, J.F.; Tornero, J.M.P. Media Education policies in Europe: A comparative overview. In Proceedings of the International Media Education Summit, Prague, Czech Republic, 20 November 2014. [Google Scholar]
  19. Law, L.C.; Roto, V.; Hassenzahl, M.; Vermeeren, P.A.; Kort, J. Understanding, Scoping and Defining User Experience: A Survey Approach. In Proceedings of the Sigchi Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 719–728. [Google Scholar]
  20. Zahidi, Z.; Lim, Y.P.; Woods, P.C. Understanding the User Experience (UX) Factors that influence User Satisfaction in Digital Culture Heritage Online Collections for Non-Expert Users. In Proceedings of the 2014 IEEE Science & Information Conference, London, UK, 27–29 August 2014; pp. 57–63. [Google Scholar]
  21. Pappas, I.O. User experience in personalized online shopping: A fuzzy-set analysis. Eur. J. Mark. 2018, 52, 1679–1703. [Google Scholar] [CrossRef] [Green Version]
  22. Lohse, A.; Aust, A.; Röder, J.; Bullinger, A.-H. Interdisciplinary Adaptation and Extension of the User Experience Questionnaire for Videos in Learning Environments. In Proceedings of the IEA 2018 20th Congress of the International Ergonomics Association, Florence, Italy, 26–30 August 2018; pp. 789–798. [Google Scholar]
  23. Li, S.; Jiang, C.; Li, L. Research on Chinese well-known e-commerce enterprises’ innovation ability based on real comment. Int. J. Comp. Sci. Math. 2020, 11, 54–62. [Google Scholar] [CrossRef]
  24. Yang, X. The Research of Network Curriculum Evaluation Method Based on the Public Opinion. Ph.D. Thesis, Northeast Normal University, Changchun, China, 2014. [Google Scholar]
  25. Kamali, A.; Kianmehr, L. The Paradox of Online Education: Images, Perceptions, and Interests. Us-China Educ. Rev. A 2015, 9, 591–601. [Google Scholar] [CrossRef] [Green Version]
  26. Tawafak, R.M.; Romli, A.B.T.; Arshah, R.A.; Malik, S.I. Framework design of university communication model (UCOM) to enhance continuous intentions in teaching and e-learning process. Educ. Inf. Technol. 2020, 25, 817–843. [Google Scholar] [CrossRef]
  27. Roca, J.C.; Chiu, C.; Martínez, F.J. Understanding e-Learning continuance intention: An extension of the technology acceptance model. Int. J. Hum. Comp. Stud. 2006, 64, 683–696. [Google Scholar] [CrossRef] [Green Version]
  28. Kravvaris, D.; Kermanidis, K.L. How MOOCs Link with Social Media. J. Knowl. Econ. 2014, 7, 461–487. [Google Scholar] [CrossRef]
  29. Mackness, J.; Mak, S.; Williams, R. The Ideals and Reality of Participating in a MOOC. In Proceedings of the 7th International Conference on Networked Learning, Aalborg, Denmark, 3–4 May 2010; pp. 266–275. [Google Scholar]
  30. Ayşenur, E.; Çavdar, D.; Bağcı, V.; Çorbacı, E.C. Factors Predicting e-Learners’ Satisfaction on Online Education. In Proceedings of the Multidisciplinary Academic Conference, Prague, Czech Republic, 19–20 February 2016; pp. 53–60. [Google Scholar]
  31. Asarbakhsh, M.; Sandars, J. E-learning: The essential usability perspective. Clin. Teach. 2013, 10, 47–50. [Google Scholar] [CrossRef]
  32. Roth, J.J.; Pierce, M.; Brewer, S. Performance and Satisfaction of Resident and Distance Students in Videoconference Courses. J. Crim. Justice Educ. 2020, 31, 296–310. [Google Scholar] [CrossRef]
  33. Perceval, J.M.; Tejedor, S. Oral-gestural, writing, audio, audiovisual and digital? The five degrees of communication in education. Comunicar 2008, 30, 155–163. [Google Scholar] [CrossRef] [Green Version]
  34. Chen, T.; Peng, L.; Yin, X.; Rong, J.; Yang, J.; Cong, G. Analysis of User Satisfaction with Online Education Platforms in China during the COVID-19 Pandemic. Healthcare 2020, 8, 200. [Google Scholar] [CrossRef]
  35. Xu, Y.; You, X.; Wong, Y. Evaluation of Tourist Satisfaction based on ROSTCM Method: A Case Study of Five Domestic Terrace Scenic Spot. Tour. Forum 2018, 11, 22–34. [Google Scholar]
  36. Chen, T.; Li, Q.; Yang, J. Modeling of the Public Opinion Polarization Process with the Considerations Individual Heterogeneity and Dynamic Conformity. Mathematics 2019, 7, 917. [Google Scholar] [CrossRef] [Green Version]
  37. Chen, T.; Li, Q.; Fu, P.; Yang, J.; Xu, C.; Cong, G.; Li, G. Public opinion polarization by individual revenue from the social preference theory. Int. J. Environ. Res. Public Health 2020, 17, 946. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Xu, C. A novel recommendation method based on social network using matrix factorization technique. Inf. Process. Manag. 2018, 54, 463–474. [Google Scholar] [CrossRef]
  39. Chen, T.; Wu, S.; Yang, J.; Cong, G.; Li, G. Modeling of Emergency Supply Scheduling Problem Based on Reliability and Its Solution Algorithm under Variable Road Network after Sudden-Onset Disasters. Complexity 2020, 2020, 7501891. [Google Scholar] [CrossRef] [Green Version]
  40. Chen, T.; Wu, S.; Yang, J.; Cong, G. Risk Propagation Model and Its Simulation of Emergency Logistics Network Based on Material Reliability. Int. J. Environ. Res. Public Health 2019, 16, 4677. [Google Scholar] [CrossRef] [Green Version]
  41. Zhang, H.P.; Zhang, R.Q.; Zhao, Y.P.; Ma, B.J. Big Data Modeling and Analysis of Microblog Ecosystem. Int. J. Autom. Comp. 2014, 11, 119–127. [Google Scholar] [CrossRef]
  42. Jun, G.; Yong, M.I. Construction and Popularization of Online-Learning Management System Based on Open Source Software. J. Wuhan Univ. (Nat. Sci. Ed.) 2012, 58, 271–276. [Google Scholar]
  43. Chen, T.; Wang, Y.; Yang, J.; Cong, G. Modeling Public Opinion Reversal Process with the Considerations of External Intervention Information and Individual Internal Characteristics. Healthcare 2020, 8, 160. [Google Scholar] [CrossRef]
  44. Lotfi, F.H.; Nemtollahi, N.; Behzadi, M.H. Ranking Decision Making Units with Stochastic Data by Using Coefficient of Variation. Math. Comp. Appl. 2010, 15, 148–155. [Google Scholar] [CrossRef] [Green Version]
  45. Shannon, C.E. Communication in The Presence of Noise. Proc. IRE 1949, 86, 10–21. [Google Scholar] [CrossRef]
Figure 1. Semantic network diagram of DingTalk before the outbreak of COVID-19.
Figure 2. Semantic network diagram of DingTalk after the outbreak of COVID-19.
Figure 3. Semantic network diagram of Chaoxing Learning before the outbreak of COVID-19.
Figure 4. Semantic network diagram of Chaoxing Learning after the outbreak of COVID-19.
Figure 5. Semantic network diagram of MOOC before the outbreak of COVID-19.
Figure 6. Semantic network diagram of MOOC after the outbreak of COVID-19.
Figure 7. Results of mining hot issues on the online education platforms before the outbreak of COVID-19.
Figure 8. Results of mining hot issues on the online education platforms after the outbreak of COVID-19.
Table 1. Rank of online education platforms.

| Application | Application List | Classification Ranking | Keyword Coverage |
| --- | --- | --- | --- |
| DingTalk | first | First (business) | 18,799 |
| Tencent Meeting | second | Second (business) | 8526 |
| Zoom Cloud | eighth | Eighth (business) | 5333 |
| Chaoxing | seventh | Seventh (education) | 2043 |
| MOOC | thirteenth | Thirteenth (education) | 7031 |
| TIM | fourth | Fourth (business) | 4467 |
| WeChat Work | third | Third (business) | 6157 |
Table 2. Number of comments collected by the seven platforms before and after the outbreak of COVID-19.
Teaching PlatformBefore the Outbreak of COVID-19After the Outbreak of COVID-19
DingTalk106174,981
Tencent Meeting14383970
WeChat Work1022959
Chaoxing8005319
MOOC9301819
TIM500400
ZoomCloud560800
Table 3. The proportion of comments with emotional orientation before the outbreak of COVID-19.

| Teaching Platform | DingTalk | Tencent Meeting | WeChat Work | Zoom Cloud | TIM | Chaoxing | MOOC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Proportion of positive comments | 50.05% | 52.92% | 40.72% | 44.35% | 45.21% | 0.00% | 60.33% |
| Proportion of neutral comments | 1.31% | 1.32% | 2.99% | 2.09% | 1.27% | 0.00% | 1.31% |
| Proportion of negative comments | 48.64% | 45.76% | 56.29% | 53.56% | 53.52% | 100% | 38.36% |

Table 4. The proportion of comments with emotional orientation after the outbreak of COVID-19.

| Teaching Platform | DingTalk | Tencent Meeting | WeChat Work | Zoom Cloud | TIM | Chaoxing | MOOC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Proportion of positive comments | 52.26% | 47.86% | 48.78% | 42.43% | 40.88% | 22.40% | 30.29% |
| Proportion of neutral comments | 1.32% | 1.34% | 1.22% | 0.73% | 0.73% | 1.43% | 1.32% |
| Proportion of negative comments | 46.43% | 50.81% | 50.00% | 56.84% | 58.39% | 76.18% | 68.39% |
Table 5. Emotional rating of NLPIR-Parser before the outbreak of COVID-19.

| Teaching Platform | DingTalk | Tencent Meeting | WeChat Work | Chaoxing | MOOC | TIM | Zoom Cloud |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Total emotional score | −50 | −68 | −33 | 1 | −1 | −38 | 0 |
| Positive score | 52 | 32 | 28 | 4 | 12 | 4 | 3 |
| Negative score | −102 | −100 | −61 | −3 | −13 | −42 | −3 |

Table 6. Emotional rating of NLPIR-Parser after the outbreak of COVID-19.

| Teaching Platform | DingTalk | Tencent Meeting | WeChat Work | Chaoxing | MOOC | TIM | Zoom Cloud |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Total emotional score | 105 | −267 | −14 | −341 | −43 | −20 | 530 |
| Positive score | 563 | 114 | 29 | 152 | 79 | 0 | 758 |
| Negative score | −458 | −381 | −43 | −493 | −122 | −20 | −228 |
Table 7. Setting of index system.

| First-Level Indicator | Secondary-Level Indicator | Definition of Indicator | Index Evaluation Method |
| --- | --- | --- | --- |
| Platform system characteristics | System stability | The website can be successfully accessed at any time, no errors occur, and the system does not crash or force-quit | Establish a separate word bank for each secondary indicator. The frequency with which the words in an index's thesaurus appear in the reviews is taken as the evaluation of that index. |
| Platform system characteristics | System compatibility | The online education platform can be used by different clients and received by different terminals | |
| Background customer service support | Technical and academic support | Provides technical support services related to the use of the online education platform and to learning, and provides valuable guidance and help for the key and difficult content of the course | |
| Platform video quality | Video picture quality | The video picture is clear, and students experience no visual discomfort or unpleasant mood when watching the video | |
| Platform video quality | Video sound quality | The video sound is clear | |
| Platform video quality | Timeliness of video information transmission | Teachers' PPT and blackboard pictures are displayed on the students' interface in a timely manner | |
| Platform technical requirements | Interface design | The interface layout is reasonable, the text expression is standard, and the font size and color are suitable for reading | |
| Platform technical requirements | Access speed | Platform page loading speed | |
| Platform technical requirements | Navigation link | Content navigation is convenient, clear, and in line with browsing habits | |
| Platform technical requirements | Security | The platform keeps users' personal information confidential | |
| Platform technical requirements | Reliability | The whole system runs stably over long periods, with few cases of lag or crashing | |
| Platform teaching support system | Exchange interaction | Promotes interaction between students and teachers and among students through live broadcast, group work, homework, and other modules | |
| Platform teaching support system | Teaching functions | The platform can deliver courses through live broadcast, recorded broadcast, PPT display, and other means | |
| Platform teaching support system | Course management | Supports synchronous or asynchronous discussion through real-time video or audio conference; specifies the permissions of course-related personnel, e.g., only teachers can release assignments and students can join the group only through teachers; reminds students to sign in on time | |
| Platform teaching support system | Student status management | Provides an online registration function and records each student's study time | |
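Table 7 states that each secondary indicator is evaluated by the frequency with which words from its word bank appear in the reviews. The sketch below shows one straightforward way such counting could be implemented; the word lists echo Table 8 but are abbreviated, and the net-frequency score at the end is an illustrative convention rather than the authors' exact scoring rule.

```python
# Minimal word-bank frequency counter for one secondary indicator
# (system stability). The word lists follow the spirit of Table 8 but are
# illustrative; the paper's full lexicons are not reproduced here.
good_words = ["fluency", "runs smoothly", "stabilization"]
bad_words = ["system crash", "poor stability", "flashback"]

reviews = [
    "the platform runs smoothly even with a large class",
    "system crash again today, poor stability",
    "another flashback during the live lecture",
]

def count_hits(words, texts):
    """Total number of times any word or phrase in the bank occurs in the texts."""
    return sum(text.count(w) for text in texts for w in words)

good, bad = count_hits(good_words, reviews), count_hits(bad_words, reviews)
print(f"positive mentions: {good}, negative mentions: {bad}, net score: {good - bad}")
```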
Table 8. Examples of word library for system stability.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of system stability | fluency | runs smoothly | stabilization |
| Poor evaluation of system stability | system crash | poor stability | unable to jump to other interface |

Table 9. Examples of word library for system compatibility.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of system compatibility | stabilization | well | diversity |
| Poor evaluation of system compatibility | at question | non-support | bad |
| A word relating to system compatibility | applicable version | suitability | fit measure |

Table 10. Examples of word library for technical and academic support.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of technical and academic support | beneficial | efficient | excellent |
| Poor evaluation of technical and academic support | perform practically no function | no customer service | rubbish |

Table 11. Examples of word library for video picture quality.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of video picture quality | fluency | distinct | free from inhibitions |
| Poor evaluation of video picture quality | slur | low quality | blank screen |

Table 12. Examples of word library for video sound quality.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of video sound quality | resonant | clear | fluency |
| Poor evaluation of video sound quality | noise | the sound is serious | off and on |

Table 13. Examples of word library for video information transmission.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of timeliness of video information transmission | good real-time | not kartun (no lag) | fluency |
| Poor evaluation of timeliness of video information transmission | kartun (lag) | motionless | delay |

Table 14. Examples of word library for interface design.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of interface design | concise | practical | as plain as print |
| Poor evaluation of interface design | clutter | anti-human | esthetic effect |

Table 15. Examples of word library for access speed.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of access speed | good speed | amazed | fluency |
| Poor evaluation of access speed | unable to load | slowing | load a long while |

Table 16. Examples of word library for navigation link.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of navigation link | logical | meticulous | distinct |
| Poor evaluation of navigation link | laborious | a mere skeleton | unable to browse |

Table 17. Examples of word library for security.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of security | covert | privacy | safety enhancement |
| Poor evaluation of security | insecure | violate human rights | lack of privacy |

Table 18. Examples of word library for reliability.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of reliability | awesome | steady | good |
| Poor evaluation of reliability | no one with a hundred | crash | login not on |

Table 19. Examples of word library for exchange interaction.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of exchange interaction | convenience | speediness | can video |
| Poor evaluation of exchange interaction | shade screens | reply is swallowed | automatically open mic |

Table 20. Examples of word library for teaching functions.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of teaching functions | convenience | stabilization | waste |
| Poor evaluation of teaching functions | troublesome | waste | mess around |

Table 21. Examples of word library for course management.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of course management | covert | privacy | clear course classification |
| Poor evaluation of course management | don't remind | time box | harass |

Table 22. Examples of word library for student status management.

| Name | Example 1 | Example 2 | Example 3 |
| --- | --- | --- | --- |
| Good evaluation of student status management | attention | switching frequency | the background records |
| Poor evaluation of student status management | cannot modify | not sure | discrepancy |
Table 23. The weights of secondary indexes before the outbreak of COVID-19 by the coefficient of variation method.

| First-Level Index | Second-Level Index | Index Weight |
| --- | --- | --- |
| Platform system characteristics | System stability | 0.056780 |
| Platform system characteristics | System compatibility | 0.062532 |
| Background customer service support | Technical and academic support | 0.057641 |
| Platform video quality | Video picture quality | 0.050599 |
| Platform video quality | Video sound quality | 0.050884 |
| Platform video quality | Timeliness of video information transmission | 0.083637 |
| Platform technical requirements | Interface design | 0.063798 |
| Platform technical requirements | Access speed | 0.135036 |
| Platform technical requirements | Navigation link | 0.048805 |
| Platform technical requirements | Security | 0.059239 |
| Platform technical requirements | Reliability | 0.112449 |
| Platform teaching support system | Exchange interaction | 0.070685 |
| Platform teaching support system | Teaching functions | 0.048894 |
| Platform teaching support system | Course management | 0.048957 |
| Platform teaching support system | Student status management | 0.050066 |

Table 24. The weights of secondary indexes after the outbreak of COVID-19 by the coefficient of variation method.

| First-Level Index | Second-Level Index | Index Weight |
| --- | --- | --- |
| Platform system characteristics | System stability | 0.061711 |
| Platform system characteristics | System compatibility | 0.054056 |
| Background customer service support | Technical and academic support | 0.082499 |
| Platform video quality | Video picture quality | 0.075169 |
| Platform video quality | Video sound quality | 0.056147 |
| Platform video quality | Timeliness of video information transmission | 0.077557 |
| Platform technical requirements | Interface design | 0.082348 |
| Platform technical requirements | Access speed | 0.062215 |
| Platform technical requirements | Navigation link | 0.000000 |
| Platform technical requirements | Security | 0.067109 |
| Platform technical requirements | Reliability | 0.068368 |
| Platform teaching support system | Exchange interaction | 0.082622 |
| Platform teaching support system | Teaching functions | 0.063522 |
| Platform teaching support system | Course management | 0.102191 |
| Platform teaching support system | Student status management | 0.064485 |
Table 25. The weights of secondary indexes before the outbreak of COVID-19 by the entropy weight method.

| First-Level Index | Second-Level Index | Index Weight |
| --- | --- | --- |
| Platform system characteristics | System stability | 0.035310 |
| Platform system characteristics | System compatibility | 0.030881 |
| Background customer service support | Technical and academic support | 0.050035 |
| Platform video quality | Video picture quality | 0.044019 |
| Platform video quality | Video sound quality | 0.031988 |
| Platform video quality | Timeliness of video information transmission | 0.051665 |
| Platform technical requirements | Interface design | 0.049066 |
| Platform technical requirements | Access speed | 0.036606 |
| Platform technical requirements | Navigation link | 0.383216 |
| Platform technical requirements | Security | 0.039662 |
| Platform technical requirements | Reliability | 0.041621 |
| Platform teaching support system | Exchange interaction | 0.052509 |
| Platform teaching support system | Teaching functions | 0.037286 |
| Platform teaching support system | Course management | 0.079421 |
| Platform teaching support system | Student status management | 0.036716 |

Table 26. The weights of secondary indexes after the outbreak of COVID-19 by the entropy weight method.

| First-Level Index | Second-Level Index | Index Weight |
| --- | --- | --- |
| Platform system characteristics | System stability | 0.052743 |
| Platform system characteristics | System compatibility | 0.061028 |
| Background customer service support | Technical and academic support | 0.052945 |
| Platform video quality | Video picture quality | 0.046264 |
| Platform video quality | Video sound quality | 0.046546 |
| Platform video quality | Timeliness of video information transmission | 0.101660 |
| Platform technical requirements | Interface design | 0.059589 |
| Platform technical requirements | Access speed | 0.145680 |
| Platform technical requirements | Navigation link | 0.044719 |
| Platform technical requirements | Security | 0.055884 |
| Platform technical requirements | Reliability | 0.128727 |
| Platform teaching support system | Exchange interaction | 0.068772 |
| Platform teaching support system | Teaching functions | 0.044794 |
| Platform teaching support system | Course management | 0.044847 |
| Platform teaching support system | Student status management | 0.045801 |
Table 27. Scores and ranks of user experience before the outbreak of COVID-19.

| Platform Name | Zoom Cloud | Tencent Meeting | DingTalk | MOOC | TIM | WeChat Work | Chaoxing |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Rank | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| System stability | 0.84 | 0.75 | 0.79 | 1.00 | 0.00 | 0.43 | 0.79 |
| System compatibility | 0.90 | 0.77 | 1.00 | 0.80 | 0.27 | 0.84 | 0.00 |
| Technical and academic support | 0.89 | 1.00 | 0.51 | 0.78 | 0.00 | 0.69 | 0.55 |
| Video picture quality | 1.00 | 0.87 | 0.91 | 0.74 | 0.76 | 0.85 | 0.00 |
| Video sound quality | 0.71 | 1.00 | 0.95 | 0.91 | 0.80 | 0.84 | 0.00 |
| Timeliness of video information transmission | 0.72 | 0.46 | 1.00 | 0.54 | 0.70 | 0.02 | 0.00 |
| Interface design | 1.00 | 0.72 | 0.43 | 0.63 | 0.00 | 0.38 | 0.77 |
| Access speed | 0.19 | 0.20 | 0.19 | 0.00 | 1.00 | 0.12 | 0.19 |
| Navigation link | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.00 | 1.00 |
| Security | 1.00 | 0.85 | 0.72 | 0.86 | 0.00 | 0.39 | 1.00 |
| Reliability | 1.00 | 0.34 | 0.33 | 0.10 | 0.08 | 0.00 | 0.49 |
| Exchange interaction | 1.00 | 0.77 | 0.27 | 0.00 | 0.36 | 0.69 | 0.63 |
| Teaching functions | 1.00 | 0.95 | 0.99 | 1.00 | 0.95 | 1.00 | 0.00 |
| Course management | 1.00 | 0.93 | 0.95 | 0.96 | 0.91 | 0.94 | 0.00 |
| Student status management | 0.78 | 0.89 | 1.00 | 0.86 | 0.96 | 0.82 | 0.00 |
| Total Score | 99.78 | 70.21 | 65.32 | 46.14 | 33.48 | 20.22 | 0 |

Table 28. Scores and ranks of user experience after the outbreak of COVID-19.

| Platform Name | Zoom Cloud | Tencent Meeting | DingTalk | MOOC | TIM | WeChat Work | Chaoxing |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Rank | 2 | 3 | 1 | 5 | 6 | 4 | 7 |
| System stability | 1.00 | 0.60 | 0.97 | 0.00 | 0.54 | 0.76 | 0.73 |
| System compatibility | 1.00 | 0.85 | 0.98 | 0.85 | 0.00 | 0.94 | 0.84 |
| Technical and academic support | 1.00 | 0.78 | 0.67 | 0.34 | 0.00 | 0.37 | 0.30 |
| Video picture quality | 1.00 | 0.58 | 0.74 | 0.36 | 0.35 | 0.53 | 0.00 |
| Video sound quality | 0.71 | 0.00 | 1.00 | 0.84 | 0.84 | 0.70 | 0.74 |
| Timeliness of video information transmission | 0.90 | 0.72 | 1.00 | 0.13 | 0.86 | 0.64 | 0.00 |
| Interface design | 1.00 | 0.47 | 0.94 | 0.33 | 0.00 | 0.41 | 0.43 |
| Access speed | 0.93 | 0.96 | 1.00 | 0.42 | 0.94 | 0.96 | 0.00 |
| Navigation link | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| Security | 0.58 | 0.79 | 0.87 | 1.00 | 0.37 | 0.00 | 0.97 |
| Reliability | 0.95 | 0.84 | 1.00 | 0.28 | 0.82 | 0.73 | 0.00 |
| Exchange interaction | 0.40 | 1.00 | 0.56 | 0.72 | 0.00 | 0.94 | 0.19 |
| Teaching functions | 0.41 | 0.00 | 0.91 | 1.00 | 1.00 | 0.73 | 0.92 |
| Course management | 1.00 | 0.65 | 0.67 | 0.39 | 0.01 | 0.32 | 0.00 |
| Student status management | 0.46 | 0.55 | 1.00 | 0.00 | 0.70 | 0.57 | 0.70 |
| Total Score | 91.18 | 54.44 | 99.79 | 17.07 | 9.18 | 44.69 | 0 |
