Article

Crowdsourcing Framework for QoE-Aware SD-WAN †

Faculty of Business and Information Technology, University of Ontario Institute of Technology, Oshawa, ON L1G 0C5, Canada
*
Author to whom correspondence should be addressed.
This paper extends the preliminary results presented by the authors at the PVE-SDN 2019 workshop in IEEE Conference on Network Softwarization (NetSoft 2019).
Future Internet 2021, 13(8), 209; https://doi.org/10.3390/fi13080209
Submission received: 2 July 2021 / Revised: 28 July 2021 / Accepted: 11 August 2021 / Published: 15 August 2021
(This article belongs to the Section Internet of Things)

Abstract

Quality of experience (QoE) is an important measure of users’ satisfaction regarding their network-based services, and it is widely employed today to provide a real assessment of the service quality as perceived by the end users. QoE measures can be used to improve application performance, as well as to optimize network resources and reallocate them as needed when the service quality degrades. While quantitative QoE assessments based on network parameters may provide insights into users’ experience, subjective assessments through direct feedback from the users have also gathered interest recently due to their accuracy and interactive nature. In this paper, we propose a framework that can be used to collect real-time QoE feedback through crowdsourcing and forward it to network controllers to enhance streaming routes. We analyze how QoE can be affected by different network conditions, and how different streaming protocols compare against each other when the network parameters change dynamically. We also compare the real-time user feedback to predefined network changes to measure if participants will be able to identify all degradation events, as well as to examine which combination of degradation events are noticeable to the participants. Our aim is to demonstrate that real-time QoE feedback can enhance cloud-based services and can adjust service quality on the basis of real-time, active participants’ interactions.
Keywords:
QoE; SDN; QoS; crowdsourcing

1. Introduction

1.1. Background and Motivation

Real-time multimedia content streaming over the Internet has become a primary application in several industries such as communication, education, interactive gaming, and entertainment. The main bulk of Internet traffic today is multimedia content, particularly on-demand video and live video streaming. Multimedia services can be characterized by (1) the content, (2) the transmission methods, and (3) the services employed to enable content exchange between different parties. Multimedia service design also takes into consideration user requirements, which include cost of service, ease of content accessibility, content quality, and multimedia desirability [1]. Furthermore, Internet usage has shifted toward content-centric access rather than host-centric. User expectations are also continuously elevating, and multimedia content providers are becoming more aware of the importance of service quality.
With the emergence of high-functioning mobile devices, network service providers are continuously trying to support a wider range of applications and quality of service (QoS) requirements with highly utilized network capacities. Traditionally, service providers evaluated the quality of multimedia streaming by focusing on network conditions and their corresponding QoS parameters such as delay, jitter, bandwidth, and packet loss. However, more recently, attention has shifted toward quality of experience (QoE), which is a subjective and user-centric assessment technique based on user perception of the service. Once mobility is introduced, packet delays and losses in multimedia streams become more common and maintaining QoS becomes more challenging. As such, QoE-based assessments are quickly becoming important metrics for managing user quality expectations [2].
There are two main approaches for QoE assessments: objective and subjective methods. While objective quality assessments aim to provide a quantitative estimation of such perception (without requiring user input), subjective quality assessments provide direct and accurate feedback of users’ perception.
Objective QoE assessment techniques are based on network analysis and technical comparisons that aim to produce a quantitative estimation of quality. This is tightly related to the QoS of an application or service. For instance, peak signal-to-noise ratio (PSNR) is considered an objective approach for measuring quality, as it assesses how much similarity exists between two different video images. It is widely used in video streaming assessment [3], where a higher PSNR value denotes higher similarity between the original and received video images.
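To make the PSNR calculation concrete, the minimal Python sketch below computes it from the mean squared error between the original and received frames. The pixel values are flattened into a plain list and the function name is illustrative; real implementations operate on 2D image arrays per color channel:

```python
import math

def psnr(original, received, max_pixel=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    Higher values indicate greater similarity between the original and
    received frames; identical frames yield infinity.
    """
    if len(original) != len(received):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10((max_pixel ** 2) / mse)

# A lightly degraded frame scores higher (is more similar) than a heavily
# degraded one, matching the interpretation given above.
frame = [100, 120, 130, 140]
noisy = [102, 118, 133, 137]
very_noisy = [90, 140, 110, 160]
print(psnr(frame, noisy) > psnr(frame, very_noisy))  # True
```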
Subjective techniques are based on user interaction and feedback. A common subjective approach is the mean opinion score (MOS), based on a quality rating system on a scale from 1 to 5, where 1 stands for ‘bad’ and 5 stands for ‘excellent’, with 3.5 representing the minimum acceptable threshold for a video MOS [4]. MOS scaling may still leave room for inaccurate representation of user perception [5], since different participants interpret the scale differently; as such, other approaches based on a simple like/dislike of the service quality have been proposed to better capture user perception.
While subjective approaches reflect users’ perception more directly and accurately, they are expensive to roll out because such QoE assessments require a large number of participants in order to obtain reliable results. They are also time-consuming, as traditional QoE experiments are conducted in a controlled lab environment, making it difficult to collect sufficient results in a limited timeframe [6]. QoE crowdsourcing techniques have been proposed to overcome these constraints; by employing a diverse group of online participants, subjective results can be obtained more cheaply and efficiently than with traditional methods.
It must be noted that the use of subjective and objective quality assessments is not exclusive; in fact, the two approaches can be used to complement each other. Data collected from users’ feedback and from network devices can be analyzed together to better match user perception with network conditions. A software-defined network provides an excellent framework for such data collection and analysis through controller-based applications, which could employ learning techniques to deploy, test, and improve global network decisions in response to degradation of service quality.
In terms of transport services, the Real-Time Transport Protocol (RTP), along with Real-Time Control Protocol (RTCP) or Real-Time Streaming Protocol (RTSP), provides a reliable foundation for real-time services [6]. The emergence of software-defined networking (SDN) has promised better control and management of end-to-end service quality in the networks [7]. Leveraging SDN’s advantages, such as dynamic programmability, central control, cost efficiency, and adaptability to changes in a networking environment, makes the software-defined wide-area network (SD-WAN) a desirable architecture to control QoE for multimedia streaming applications and services. SD-WAN controllers can make use of QoE assessments collected through user feedback by enhancing and updating streaming routes. Furthermore, SD-WAN controllers incorporate the emerging concept of intent-based networking (IBN), which would allow network administrators to define their QoE requirements in the form of service intents, i.e., as a collection of policies, goals, and network-level performance targets, rather than specific device-level configuration parameters. Such an implementation greatly facilitates the management of service-level QoE requirements for multimedia service delivery.
In this paper, a framework is proposed based on a combination of real-time QoE measurement application and QoS quality parameters, which can accommodate a variety of streaming protocols while deployed in an SD-WAN. This framework emphasizes the correlation between QoE and QoS and how the overall user QoE perspective can be affected. We study how dynamic changes in the network could affect the performance of different streaming protocols, and how the streaming protocol adjusts to network changes and, consequently, the perceived QoE of the streaming content. The main protocols used in our model are RTP, RTP over TCP, SCTP, and UDP.
The proposed framework is based on real-time alerts of quality degradation from the user side during live video streaming over a cloud-based SD-WAN environment. A QoE-rating application is deployed on the user devices to collect user feedback during a real-time streaming session. Through this application (which can be deployed as a plugin on web browsers or multimedia players), users send individual or cumulative feedback to the SD-WAN controller (directly or indirectly through a data summarization gateway) to inform the controller about potential problems in the video streaming. This QoE feedback would enable the SD-WAN controller to investigate problems in the service path and to take corrective action by making changes to the topology, resources, or routes of the content delivery network. The main feature of our proposed framework is the ability to identify changes in the QoE of video streams with dynamically changing network conditions in real time.

1.2. Contributions

This paper makes the following contributions:
  • The design of a four-tier framework for QoE-aware content delivery over SD-WAN.
  • Performance analysis of various transport protocols for multimedia delivery, through a set of human-paired comparison (HPC) experiments.
  • Performance analysis of real-time SD-WAN rerouting under quality degradation in an intent-based network environment.
The experiments presented in this research aim to answer the following questions:
(1) Will the participants be able to identify quality degradations at the same time as the network condition changes?
(2) Does the content of the video affect quality-rating decisions by the participants despite a noticeable quality degradation?
(3) When comparing objective analysis to human-perceived quality, would the results be consistent?
(4) Which protocols are most suited for unstable or rapidly changing networks?
(5) Is there consistency in protocol performances when video content changes, or when different events and scenarios occur?
For each case, we present an analysis of the experimental results and reflect on how they address the relevant question from the abovementioned list.

1.3. Paper Outline

The remainder of this paper is organized as follows: in Section 2, a review of the relevant literature is presented; in Section 3, we present an overview of the SD-WAN network and QoE measurement models; in Section 4, we present our crowdsourcing QoE-Aware design; in Section 5, we describe the implementation of our test bed; Section 6 presents the performance results and an analysis of the proposed model; Section 7 provides conclusions and directions for future work.

2. Related Works

A comprehensive discussion of network and service performance indicators for multimedia applications was presented in [1]. The most important performance indicators include the following:
  • One-way end-to-end delay (including network, propagation, and equipment) for video or audio should be within 100 to 150 ms.
  • Mean opinion score (MOS) levels for audio should be within 4.0 and 5.0. MOS levels for video should be between 3.5 and 5.0.
  • End-to-end delay jitter must be short, normally less than 250 μs.
  • Synchronization of intermedia and intramedia should be maintained using suitable algorithms. To maintain intermedia synchronization, the differential delay between audio and video transmission should be within −20 ms to +40 ms.
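The performance indicators above can be checked programmatically against their target ranges. The sketch below uses the thresholds listed above; the function and its structure are illustrative and not taken from [1]:

```python
# Threshold values taken from the performance indicators listed above.
VIDEO_TARGETS = {
    "max_delay_ms": 150,        # one-way end-to-end delay
    "max_jitter_us": 250,       # end-to-end delay jitter
    "min_mos_video": 3.5,       # minimum acceptable video MOS
    "sync_skew_ms": (-20, 40),  # audio/video differential delay
}

def meets_targets(delay_ms, jitter_us, mos, skew_ms, targets=VIDEO_TARGETS):
    """Return True if all measured indicators fall within the targets."""
    low, high = targets["sync_skew_ms"]
    return (delay_ms <= targets["max_delay_ms"]
            and jitter_us < targets["max_jitter_us"]
            and mos >= targets["min_mos_video"]
            and low <= skew_ms <= high)

print(meets_targets(120, 200, 4.1, 10))  # True
print(meets_targets(120, 200, 3.2, 10))  # False: MOS below 3.5
```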
The following parameters should also be taken into consideration while designing a QoE framework for multimedia services [4]:
  • Video quality at the source.
  • How the content is delivered over the network and QoS quality parameters.
  • User perception, expectations, and ambiance.
Objective QoE measurements rely on quantitative measures from the network and end-devices to predict users’ perception of video quality. As mentioned earlier, PSNR is one such metric that has been used extensively [3]. One drawback of PSNR is that it does not take into consideration how human perception works. The structural similarity index (SSIM) [8] is another measurement approach for estimation of perceived visual distortion based on the structural distortion of the video. SSIM addresses the shortcoming of PSNR by combining other factors such as contrast, luminance, and structure similarity, and it compares the correlation between the perceived video images and the original video images; hence, it can be considered a full-reference model, where a higher ratio denotes a higher structural similarity. Another objective approach is the Video Quality Metric (VQM) [9], which is a metric to measure the perception of video quality as closely as possible to human perception. The VQM metric is designed to be a general-purpose quality model for a range of video systems with different resolutions, frame rates, coding techniques, and bit rates. VQM measurement takes noise, blurring, and block and color distortions into consideration. VQM gives an output value of zero if no impairment is perceived, and it increases with a rising level of impairment. Research on quantitative QoE estimates has more recently focused on the adaptation of such measures in cloud and content streaming networks. For instance, in [10], the authors used Server- and Network-Assisted DASH (SAND) to provide communication between the end device and the network node, such as a WiFi access point, and to reallocate network resources to avoid empty buffers on the client side that cause QoE degradation. In [11], the authors proposed a framework for dynamic adaptation of bitrate to the end nodes to provide a fair allocation of resources to all users.
In both cases, network edge nodes are involved in quality estimation and resource allocation.
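To make the full-reference idea behind SSIM concrete, a simplified, single-window version can be computed from the luminance, contrast, and structure statistics of two frames. The sketch below is for illustration only; practical SSIM implementations average the index over sliding local windows rather than computing one global value:

```python
def ssim_global(x, y, max_pixel=255.0):
    """Single-window SSIM over two equal-length pixel sequences.

    Combines luminance (means), contrast (variances), and structure
    (covariance) terms; 1.0 means structurally identical frames.
    """
    n = len(x)
    c1 = (0.01 * max_pixel) ** 2  # standard stabilizing constants
    c2 = (0.03 * max_pixel) ** 2
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

frame = [100, 120, 130, 140]
print(round(ssim_global(frame, frame), 4))  # 1.0 for identical frames
```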
Subjective QoE measurements focus on adopting and improving MOS scores for various multimedia services. The focus is mainly on definition of the right score, as well as on methods to collect them efficiently from the users. More recently, crowdsourcing techniques have been considered for collecting user QoE feedback. In [5], the authors designed a crowdsourcing framework that overcomes some of the disadvantages of the MOS technique, namely, (1) difficulty and inconsistency for participants to map their ratings to five-point MOS scaling, (2) rating scale heterogeneity, and (3) the lack of a cheat detection mechanism. This new approach provides comparable consistency to the MOS methodology by introducing the ability to have QoE measured in a real-life environment using crowdsourcing rather than a controlled environment in a laboratory. Another approach, the OneClick framework [12], captures user perception in a simple one-click procedure where experiments are held to gather user feedback, and then collected data are processed to calculate the accumulative QoE of all users. Programmable QoE-SDN APP was discussed in [13], which aims to improve QoE for video service customers by minimizing the occurrence of stalling events in HTTP Adaptive Streaming (HAS) applications, and by utilizing the forecast and rating estimates provided by mobile network operators.
In order to tackle the requirements of multimedia over IP, multimedia services should have the ability to classify traffic, prioritize different applications, and make the necessary reservations accordingly. The Internet Engineering Task Force (IETF) developed an Integrated Service framework that consists of real-time and best effort services. RTP, along with RTCP and RTSP, provides a reliable foundation for real-time services. However, this framework has had limited deployment due to complexity and backward compatibility.
Some past research efforts focused on the specific use of SDN controllers and the importance of the selection of SDN controllers in designing network models. Recently, the research in [14] focused on using intent-based programming in an Open Network Operating System (ONOS) [15] to allow more dynamic monitoring and rerouting services by using intents. Intent Framework [16] enables applications to provide network requests in the form of a policy, not a mechanism. Intents provide high-level abstractions where programmers only focus on the task that should be accomplished, rather than how these tasks will be translated into low-level rules and how these rules can be installed into network devices. The abovementioned research works aim to enhance Intent Framework to compile multiple intents simultaneously and to reoptimize paths on the basis of flow statistics.
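To illustrate the intent abstraction, the sketch below shows how an application might express a host-to-host connectivity request to an ONOS controller through its REST API. The endpoint and payload fields follow ONOS's documented intents API, but the application ID and host IDs are hypothetical, and actually submitting the request requires a running controller:

```python
import json
from urllib import request

ONOS_URL = "http://127.0.0.1:8181/onos/v1/intents"  # default ONOS REST endpoint

def host_to_host_intent(app_id, host_a, host_b, priority=100):
    """Build a HostToHostIntent payload: the policy ('connect A and B') is
    stated; path computation and rule installation are left to ONOS."""
    return {
        "type": "HostToHostIntent",
        "appId": app_id,
        "priority": priority,
        "one": host_a,
        "two": host_b,
    }

def submit_intent(intent, url=ONOS_URL):
    """POST the intent to the controller (requires a live ONOS instance)."""
    req = request.Request(url, data=json.dumps(intent).encode(),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

# Hypothetical host IDs in ONOS's MAC/VLAN format.
intent = host_to_host_intent("org.example.qoe",
                             "00:00:00:00:00:01/None",
                             "00:00:00:00:00:02/None")
print(intent["type"])  # HostToHostIntent
```

Note how the request names only the endpoints and a priority; no device-level flow rules appear, which is precisely the policy-not-mechanism distinction made above.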
Leveraging SDN in routing would allow service providers to customize routing services for applications [17]. This approach is based on a new open framework called Routing as a Service (RaaS) by reusing virtualized network functions in which customized routing services are built on the routing paths for different applications.
Several prior works examined the possibility of managing QoS and QoE using the advantages of SDN architecture. In [18,19], the authors focused on how QoE can be managed efficiently over cloud services, and they investigated the challenges facing QoE management in cloud applications, especially the quality of multimedia streaming. The goal of QoE management in that environment was to provide high-quality services to users on the cloud while taking into consideration the cost tradeoff.
In [20], implementing QoS over SDN, the authors designed an approach to introduce QoS into IP multicasting using SDN in order to achieve proper and flexible control and management of the network environment. The OpenFlow protocol was adopted to allow a controller to monitor IP multicasting statistics for each flow to provide end-to-end QoS. They implemented a learning algorithm to allocate the required network resources without dropping low-priority packets, which could otherwise impact the performance of those flows. It was, thus, demonstrated that SDN could be used for network quality management.
On the subject of QoE models, two different approaches have been researched: in-network models and crowdsourcing models. The prior studies in this area are presented below.

2.1. In-Network QoE Models

In [21], the authors proposed an In-Network QoE Measurement Framework (IQMF). User feedback is not considered as an input parameter in this scheme; instead, the streams are monitored within the network. Two QoE metrics are adopted by IQMF for measuring experience: (1) quality of video, and (2) switching impact over HTTP Adaptive Streams. IQMF offers those measurements for QoE as a service through an API (Figure 1). This service can be provided to a content distributor or a network provider. Leveraging SDN allows the control plane to interact with the IQMF framework, thus providing more flexibility to analyze and measure a participant’s QoE. It also enables IQMF to apply traffic management dynamically and provides scalability for deploying more measurement agents. IQMF interacts with the OpenFlow controller that maintains the forwarding behavior of the network to ensure that all necessary information about flow duplications is provided for better monitoring of QoE.
The QoE measurement framework operates by filtering HTTP packets in the traversing traffic. It then identifies HTTP GET requests, and examines those requests for identification of manifest files such as Media Presentation Description (MPD) files. The MPD parser extracts information from the MPD file—different representations that include references to different resolutions, quality multitude, and playback codecs. The measurement engine then merges the parsed information with supplementary details from the HTTP packet filter in order to monitor the behavior of the user whilst playback continues.
Another model, described in [22], aims to enhance the capabilities of Dynamic Adaptive Streaming over HTTP (DASH)—a standard for multimedia streaming that changes the quality of content presentation automatically in accordance with network conditions. In this research, the authors took into consideration the QoE as perceived by the user, and then integrated user perception with dynamic changes in the content. Such enhancements will provide more efficient QoE measurements and increase positive feedback by the users. This model allows automated estimation of MOS measurements from QoS parameters such as video bitrate, video frame rate, and video quantization parameter.
The following three metrics were used in this model:
  • Buffer underflow/overflow: to prevent freezing images and losing packets, buffer thresholds were specified. TCP is also used for reliable transmission.
  • Frequencies and amplitude of switching quality: the frequency of quality switches of the represented content was identified as one of the factors affecting QoE.
  • QoS media parameters: the parameters associated with the content of media.
It was found through experiments that measuring presentation intervals affected by media parameters took several seconds, whereas measuring switching quality and re-buffering required longer time intervals, thus impacting QoE. “The representation quality switch rate required a recursive approach where the MOS is calculated on the basis of previous MOS variations in order to take into account the entity of the quality switch in addition to the rate” [22]. This model has shown potential in enhancing the DASH adaptation logic for selecting the best video quality levels by integrating QoE monitoring.
The OpenE2EQoS model, discussed in [20], aimed to introduce QoS into IP multicasting using SDN for flexible control management of network environment. In this approach, the OpenFlow protocol was adopted to allow the controller to monitor IP multicasting statistics for each flow and to provide end-to-end QoS. The system makes use of the Additive Increase/Multiplicative Decrease (AIMD) algorithm to enhance adaptive learning of efficient bandwidth utilization over time. An N-dimensional statistical algorithm is used in this approach to redirect low-priority traffic packets from overly crowded links while maintaining priority for multimedia packets.
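The AIMD behavior mentioned above can be sketched in a few lines: the sending rate grows by a fixed increment while the link is healthy and is cut multiplicatively on congestion. The parameters and units below are illustrative, not taken from [20]:

```python
def aimd_step(rate, congested, increase=0.5, decrease=0.5,
              floor=0.5, ceiling=100.0):
    """One AIMD update (rates in Mbps, illustrative): additive increase
    while the link is healthy, multiplicative decrease on congestion."""
    if congested:
        return max(floor, rate * decrease)
    return min(ceiling, rate + increase)

# The rate probes upward linearly, then backs off sharply on congestion.
rate = 10.0
trace = []
for congested in [False, False, False, True, False]:
    rate = aimd_step(rate, congested)
    trace.append(rate)
print(trace)  # [10.5, 11.0, 11.5, 5.75, 6.25]
```

The asymmetry (slow probing up, fast backing off) is what lets AIMD converge toward an efficient, fair share of bandwidth over time.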
In [23], a method was proposed for predicting QoE using machine learning algorithms leveraging SDN. An architecture was designed that uses previously measured MOS values from users, collected during different network conditions. These data, along with objective measures, are supplied to machine learning algorithms to predict MOS values for the current network conditions. The SDN QoE Monitoring Framework (SQMF) [24] is a monitoring application that aims to preserve QoE for both video and VoIP applications in real time, regardless of unexpected network issues, by continuously monitoring network parameters and using QoE estimation models. In [25], a new QoE-Aware management architecture over SDN was proposed, which was able to predict MOS by mapping different parameters of QoS into QoE. The proposed framework was designed to autonomously control and allocate the infrastructure of underlying network resources with the ability to avoid QoE degradation, optimize resource use, and improve QoS performance.

2.2. Crowdsourcing QoE Models

A general crowdsourcing framework for QoE capture was discussed in [5]. The objective in that work was to overcome some disadvantages of the MOS technique by utilizing the paired comparison technique, as well as the ability to have QoE measured in a real-life environment using crowdsourcing rather than a controlled environment in a laboratory. Four case studies were conducted using audio and video content to evaluate the effectiveness of the proposed framework.
The key features of using this framework for QoE evaluation are as follows:
  • It can be generalized for different types of multimedia content with no need for adjustments.
  • Use of the pair-comparison rating technique provides simpler user feedback compared to the MOS technique.
  • Results from paired judgements can be evaluated using probability models.
  • Reward and punishment schemes are used, where users are given appropriate incentives to give honest feedback in order to obtain trustable quality measures.
This framework is a promising evaluation technique; however, as the authors of the study pointed out, it measures the quality of perception (QoP) rather than QoE: “QoP reflects a user’s detectability of a change in quality or the acceptability of a quality level” [5].
The OneClick Framework [12] captures user perception in a simple one-click procedure. Whenever a user is not satisfied with the quality of the viewed content, they can click on a button that informs the system of their dissatisfaction. In contrast to the MOS technique, a user does not have to decide between different grading scales and what best suits their perception. OneClick is a real-time framework, meaning the clickable button is available throughout the viewing experience. The user can record their dissatisfaction several times during the session, and each click is timestamped. This framework is based on the PESQ (Perceptual Evaluation of Speech Quality) and VQM (Video Quality Metric) metrics, which are both objective measurements.
The key advantages of OneClick Framework are as follows:
  • Initiative: Participants are not required to decide on the perceived quality; they only report their dissatisfaction through a single button click.
  • Lightweight: The framework does not require any specific deployments and is not expensive to roll out.
  • Time-aware: Participants can record their dissatisfaction several times during the session, with each click timestamped, which shows how perception can change over time.
  • Independent: OneClick can be used in conjunction with several applications simultaneously.
OneClick includes two main steps: (1) experiments are held to gather users’ perception feedback during different network conditions; (2) collected data are then processed to identify QoE measurements. Figure 2 shows the full OneClick process assessment technique with the following steps: (1) preparing test materials (optional); (2) asking the subjects to do the experiments; (3) inferring average response delays; (4) modeling the relationship between network factors and click rates; (5) predicting the click rate given each network factor; (6) summarizing an application’s QoE over various network conditions by comfort region.
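Steps (3)–(5) above amount to turning timestamped clicks into a per-condition click rate that can then be related to network factors. A minimal sketch, with hypothetical session data and condition names:

```python
def click_rate(click_times, duration_s):
    """Dissatisfaction clicks per minute over one viewing session."""
    return 60.0 * len(click_times) / duration_s

# Timestamped dislike clicks (seconds into the session) gathered under
# two hypothetical network conditions.
sessions = {
    "loss_0pct": ([12.0], 120.0),
    "loss_5pct": ([5.0, 31.5, 48.2, 77.0, 90.1, 110.4], 120.0),
}
rates = {cond: click_rate(times, dur) for cond, (times, dur) in sessions.items()}
print(rates)  # {'loss_0pct': 0.5, 'loss_5pct': 3.0}
```

Fitting a model to such (network factor, click rate) pairs is what allows the final step, predicting the click rate for unseen network conditions.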
Given the simplicity of OneClick’s approach for user feedback collection, a similar approach was used in our work for the user side of our framework, i.e., to allow service users to express their displeasure through a click when the quality of the streaming deteriorates.

Research Gap

Most past research contributions in this area have focused on the quality of video streaming, video analytics, network QoS, and metrics for conducting QoE. In those works, QoE was usually measured and assessed after completion of a streaming session or paired comparisons, and results were sent back to management systems to make changes for enhancing the user QoE in future streaming. However, current proposals often do not provide details on how such mechanisms are deployed and used in a network in real time, e.g., how QoE feedback can be gathered in real time and how it should be communicated to the network controllers to allow dynamic network changes that enhance QoE and streaming quality while participants are observing the session. While techniques such as OneClick provide a real-time mechanism for collecting user feedback, they do not offer a framework for its network implementation and use in network resource control and management. This is the research gap that we attempt to address in this work, with a particular focus on deployment in a software-defined transport network.

3. Network Model and Assumptions

3.1. Network Model

The real-time cloud-based content-delivery network model is shown in Figure 3. The network model is based on four tiers: content provider tier, SD-WAN tier, data summarization tier, and end-user tier.
The content provider tier produces the multimedia content for distribution over the network. The content can be a live stream or broadcast, music, video on demand, etc., although live video content was our main focus in this paper. The content is made to be accessible and processed by the various servers and forwarding modules in SD-WAN.
The SD-WAN tier consists of a collection of interconnected network switches, controlled by one or more centralized controller(s) through a southbound protocol such as OpenFlow. The OpenFlow protocol is an open-source, standard protocol that governs communication between SDN controllers and switches. OpenFlow allows full control over packet flows, where the controller specifies the routing paths as well as how packets should be processed.
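As a sketch of this controller-to-switch relationship, a controller-computed route can be translated into per-switch match/action entries. The data layout below is purely illustrative (plain dictionaries, not an actual OpenFlow wire encoding), but it mirrors how a path is programmed hop by hop:

```python
def path_flow_entries(path, src_ip, dst_ip):
    """Translate a computed path into per-switch flow entries.

    `path` is a list of (switch_id, in_port, out_port) hops; each entry
    matches the flow's IP pair at its ingress port and forwards packets
    out the chosen port, one rule per switch along the route.
    """
    entries = []
    for switch, in_port, out_port in path:
        entries.append({
            "switch": switch,
            "match": {"in_port": in_port,
                      "ipv4_src": src_ip, "ipv4_dst": dst_ip},
            "actions": [{"output": out_port}],
        })
    return entries

route = [("s1", 1, 2), ("s2", 2, 3), ("s3", 1, 4)]
rules = path_flow_entries(route, "10.0.0.1", "10.0.0.2")
print(len(rules))  # 3, one flow entry per switch on the path
```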
The end-user tier contains a wide range of end devices that are used for accessing the content and provide continuous, real-time feedback of QoE in the form of perceived video quality through SD-WAN. The end-user tier connects to SD-WAN through a number of gateway servers, which form a data summarization tier and are responsible for delivering the content, as well as collecting and summarizing the crowd feedback to the controllers.
The software-defined centralized control plane in SD-WAN allows efficient traffic engineering to meet dynamic service requirements. An SD-WAN controller can implement a global view of the network, making it an ideal physical substrate for cloud-based content-delivery network (CCDN) environments. Implementing QoE applications over SD-WAN allows us to further enhance the quality of streaming by adjusting video quality on the basis of user feedback, not just by relying on QoS SLA.

3.2. QoE Model

The proposed framework uses a subjective crowdsourcing approach for QoE assessment. Crowdsourcing allows subjective measures based on both video-pair comparisons and MOS-based rating comparisons, with the flexibility to choose participants’ demographics if certain demographics are required for specific results.
Despite their scalability, crowdsourcing experiments lack supervision, which makes some results not fully trustworthy. Researchers should be able to identify trusted and untrusted participants. This can be achieved by designing the crowdsourcing campaigns on the basis of certain best practices. These best practices mainly concern the technical implementation aspects of the experiment, campaign, and test design, as well as a thorough statistical analysis of results [26]. Campaigns should be simple enough for participants to understand how the experiment is designed and what is required to complete it.

4. Proposed Framework

4.1. QoE Crowdsourcing

The proposed framework uses real-time QoE crowdsourcing feedback of quality degradation during live video streaming over a cloud-based SD-WAN environment. Figure 4 highlights details of our framework, in which a streaming server transmits multimedia content over the SD-WAN environment. During the video streaming, dynamic network conditions and events may affect the perceived quality at the user end. Several streaming protocols, such as RTP, SCTP, TCP, and UDP, can be considered for data transmission. The SD-WAN is responsible for routing the video stream and delivering it to the participating users. A QoE-rating application is deployed on the user-end devices to return user feedback during a real-time streaming session, through which the participants click on a dislike button when they feel the quality has degraded.
The QoE-rating feedback application is designed to send REST API requests (if needed, these can be summarized through intermediate servers for scalability) to the SDN controller to inform it about potential problems in the current video stream and possibly request corrective actions such as traffic rerouting. The QoE-rating application can be deployed as a web browser plugin, a multimedia player plugin, or a desktop application. In contrast to the MOS technique, participants do not have to decide between different grading scales; instead, they only alert the SD-WAN controller to quality degradation, thus providing more decisive feedback. The timing between feedback events can itself indicate quality, i.e., more frequent feedback (“dislike” clicks) indicates lower quality than less frequent feedback. As such, all clicks are timestamped before transmission. The intermediate servers can collect this feedback and provide summaries (e.g., the number of dislikes within a given time interval) to the QoE-control algorithm on the SD-WAN controller for potential actions.
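As a minimal sketch of the client side, the following builds the kind of timestamped “dislike” message described above. The field names (`user_id`, `stream_id`, `event`, `ts`) are illustrative assumptions; the framework only specifies that clicks are timestamped before transmission.

```python
import json
import time

def make_dislike_payload(user_id: str, stream_id: str) -> str:
    """Build a timestamped 'dislike' feedback message as a JSON string.

    Field names are hypothetical; the paper only requires that each
    click carries a timestamp attached at the client before sending.
    """
    payload = {
        "user_id": user_id,
        "stream_id": stream_id,
        "event": "dislike",
        "ts": time.time(),  # client-side timestamp
    }
    return json.dumps(payload)
```

The resulting string would be sent as the body of the HTTP POST request to the feedback server or intermediate summarization server.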
Providing crowd-based QoE feedback would enable the SD-WAN controller to detect problems in the service path and to take corrective action by making changes to the virtual topology of the content delivery network, reassigning users, or rerouting traffic. Ideally, a resource optimization algorithm such as [27] can be executed in real time to respond to QoE degradation; however, the complexity and processing time of such algorithms must be considered in order to provide an effective remedy in real time. The use of crowdsourcing would provide a more scalable and efficient method for collecting feedback, whether to correct real-time problems or to create performance benchmarks.
The SD-WAN QoE-aware rerouting mechanism operates as follows:
  • A transport route on the SD-WAN experiences delay or loss on specific network links, causing quality degradation in delivery that manifests as delay, jitter, or loss.
  • When users notice the video quality degradation, they click the ‘Dislike’ button. The feedback is transmitted to the feedback server through HTTP.
  • The server communicates the feedback to the SDN controller.
  • The controller resolves the client and server IPs into MACs using REST.
  • The controller retrieves the current intent information.
  • The controller queries the current traffic path.
  • The controller computes alternative paths between the client and the server.
  • The controller installs the modified intent and queries the new path.
  • A new streaming path is established, and the QoE enhancement is verified through user feedback (i.e., the absence of further dislike feedback).
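As a rough illustration, the controller-facing steps above map onto the public ONOS REST API as sketched below. The base URL, the host-ID format, and the idea of collecting the endpoints in a small helper are assumptions for illustration only:

```python
def onos_urls(base: str, src_host: str, dst_host: str) -> dict:
    """Map the rerouting steps to ONOS REST endpoints (sketch).

    Endpoint paths follow the public ONOS REST API; host IDs are
    MAC/VLAN identifiers such as "00:00:00:00:00:01/None".
    """
    return {
        # Step: resolve client/server IPs into MACs (host inventory)
        "hosts": f"{base}/onos/v1/hosts",
        # Steps: retrieve current intents / install the modified intent
        "intents": f"{base}/onos/v1/intents",
        # Steps: query current path / compute alternative paths
        "paths": f"{base}/onos/v1/paths/{src_host}/{dst_host}",
    }
```

In practice, the feedback server would issue GET requests to `hosts` and `paths` and a POST to `intents` against these URLs (e.g., with the ONOS default base `http://<controller>:8181`).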
A high-level description of the QoE feedback collection and evaluation applications on the client and server sides is presented in the following Box 1. More implementation details, including controller operations, are described in Section 5.
Box 1. Feedback Collection and Rating Operations.
User-Side QoE Feedback Collection Application
 
Establish a TCP connection to the server on a predefined port;
While TRUE {
    Wait for user input;
    If input received
        Send an HTTP POST message to the server;
}
 
Server-Side QoE Feedback Rating Application
 
Listen on a predefined port for connection requests;
While in a connection {
    Listen for HTTP POST messages;
    If POST message includes negative feedback
        Increment the negative feedback counter;
    If negative feedback counter >= THRESHOLD
        Send a REST message to the controller;
}
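The server-side logic of Box 1 can be sketched as a small transport-free class, with the REST call to the controller replaced by a plain callback; resetting the counter after an alert is our assumption, since Box 1 leaves this open:

```python
class FeedbackRater:
    """Count negative feedback and fire an alert callback at a threshold.

    A minimal sketch of the server-side rating logic in Box 1; the
    HTTP/REST plumbing is replaced by ordinary function calls.
    """

    def __init__(self, threshold, alert_controller):
        self.threshold = threshold
        self.alert_controller = alert_controller  # stands in for the REST message
        self.negative = 0

    def on_post(self, is_negative):
        """Handle one incoming feedback message."""
        if is_negative:
            self.negative += 1
        if self.negative >= self.threshold:
            self.alert_controller(self.negative)
            self.negative = 0  # reset after alerting (assumption)
```

For example, with a threshold of 3, the third negative message triggers the controller alert.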

4.2. Data Summarization

The issue of scalability is an important challenge in the use of crowdsourcing for QoE control. Certain streaming content services may have millions of users at any time, and receiving and analyzing feedback from them in real time could become a bottleneck. In order to address this issue, we add two elements to our design. Firstly, an intermediary data-summarization layer is placed between the end-user tier and the SD-WAN to implement a hybrid fog computing operation. The nodes in this layer are responsible for receiving feedback from users in their region, summarizing it, and forwarding the summaries to the network controller(s). For instance, a summary could include the number of dislikes received over a reporting period. The number and location of these intermediary nodes can be optimized to accommodate any processing limits at the controllers.
Secondly, a minimalist approach is used in the design of the user feedback data. As discussed in Section 3, our framework relies on negative feedback (dislike button) that is only given when there is a problem. Therefore, no feedback is expected in all regions at all times as long as the streaming quality is acceptable. When a congestion or failure scenario causes a spike of negative feedback in a region, the intermediary-layer server in that region will collect the feedback and send a summary report to the central controller. This approach will allow for service scalability to a wide area network.
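The summarization performed by an intermediary node can be as simple as bucketing raw click timestamps into fixed reporting intervals; the function below is a minimal sketch of that idea (the interval length and return shape are our choices for illustration):

```python
from collections import Counter

def summarize_dislikes(click_timestamps, interval):
    """Bucket raw dislike timestamps (seconds) into fixed reporting intervals.

    Returns {interval_index: dislike_count}. An intermediary node would
    forward only these summaries, not individual clicks, to the controller.
    """
    return dict(Counter(int(ts // interval) for ts in click_timestamps))
```

With a 10 s reporting period, for example, clicks at 0.5 s, 1.2 s, and 9.9 s all fall into the first interval, while a click at 10.1 s opens the second.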
It must be noted that the QoE response time is inherently different from QoS responses. In the case of user-initiated QoE, the feedback is controlled by user actions that typically span a few seconds. QoE-correcting actions from the controller can also be executed within a similar time scale; as such, the limiting impact of propagation delays is less significant in QoE-aware services, as opposed to QoS-aware systems.

4.3. Use of the Proposed Framework for Performance Baseline

In addition to a real-time response to QoE feedback, this framework can be used for planning and performance analysis of future streaming. In such a scenario, the video stream is saved under different protocols and network conditions so that QoE can be measured through user participation. The saved videos are used to construct paired comparison stimuli on any chosen crowdsourcing platform, in a campaign advertised to prospective participants. Such campaigns include the participation requirements, consent forms, and instructions for the designed stimuli, which are constructed from the processed videos. Figure 5 illustrates the crowdsourcing pair comparison model design.
Paired comparison can be seen as an alternative to the MOS-based method due to its simplicity: the need to decide among five different ratings is eliminated, making the judgment more intuitive. The paired comparison technique makes it easier for users to express their opinions, make decisions, and interact with the experiment when multiple factors are applied.
In order to build the performance benchmark, participants are recruited to rate videos in a paired comparison stimulus with different streaming protocols and to provide feedback on which video has the higher QoE. In any crowdsourcing model design, there must be a method to identify reliable, trustworthy participants, since the experiments are not controlled and lack the monitoring that traditional lab environments provide. To verify participant reliability, trap questions are introduced: participants are presented with a golden question, a stimulus pairing the original, unprocessed video as a reference with a degradation-processed video. If a participant does not rate the original video with a higher QoE, that participant is considered unreliable, and their results are excluded. A campaign runs over a period of time that depends on the number of participants required for the study. Rating scores can be computed at any point during the campaign to show trends and results. Different methods can be used to compute these scores, such as the Crowd Bradley–Terry model [28].
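To make the scoring step concrete, the sketch below implements the classic Bradley–Terry minorization–maximization update from raw pairwise win counts. Note this is the plain Bradley–Terry model; the Crowd Bradley–Terry variant [28] used in the paper additionally models rater reliability, which is omitted here:

```python
def bradley_terry(wins, items, iters=100):
    """Estimate Bradley-Terry strength scores from pairwise win counts.

    `wins[(a, b)]` is how often item `a` beat item `b`. Uses the standard
    MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j), with normalization.
    """
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            w_i = sum(wins.get((i, j), 0) for j in items if j != i)
            denom = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0)) / (p[i] + p[j])
                for j in items if j != i
            )
            new[i] = w_i / denom if denom else p[i]
        s = sum(new.values())
        p = {i: v * len(items) / s for i, v in new.items()}  # keep mean score at 1
    return p
```

For instance, if RTP beats UDP in 8 of 10 comparisons, RTP receives the higher strength score.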

5. Test Bed Implementation

5.1. Emulation Environment

An SD-WAN environment was created using Mininet and a remote ONOS controller, providing an emulation of a software-defined virtual network similar to a real networking environment, with the kernel, switch, and application code running in a single virtual machine. ONOS also supports the concept of network intents, which allows service requirements to be defined in the form of policies. The VLC server was used as the video streaming application, and user feedback was collected through a custom-designed plugin for the VLC client. For the purpose of these tests, the role of the intermediary layer was integrated into the hosts, i.e., it was assumed that the feedback response from each host represents summarized feedback from the region it serves. The network intents were reactively created using the ONOS IFWD application.

5.2. Use Case Scenarios

Two experiments were conducted using human participants, one based on paired video comparison using the QoE crowdsourcing technique and the other based on real-time feedback during video streaming in a controlled lab environment. The purpose of these experiments was to attempt to provide answers to the research questions that were posed in Section 1, i.e., to have a better understanding of how quickly and reliably the participants can identify quality degradation, how their results compare to the objective analysis of videos, and how different transport protocols perform with regard to QoE of multimedia content.
The first experiment, human-paired comparison (HPC), was conducted as a QoE crowdsourcing campaign of paired video comparisons on a set of processed videos using the subjectify.us web service [29]. Comparable video sets were created on the basis of four selected HD videos. Each video was 40 s long, which allowed sufficient time for introducing quality degradation and observing user reactions without being too long. The videos also varied in bitrate, frame rate, and content type, so that the analysis of results for research questions 1–3 could be reasonably separated from video quality and content. Details of these videos are presented in Table 1.
The scenario files were created by changing delay and loss events every 10 or 5 s, as shown in Table 2, while RTP, Legacy UDP, and RTP over TCP were applied as streaming protocols in different scenarios. These processed videos were used as the golden question for verifying the reliability of participants in comparing the original HD video to the processed (deteriorated) video. If participants chose the processed copy, then their feedback results were considered untrustworthy. The choice of different transport protocols and network conditions allowed us to evaluate results for research questions 4–5.
For the second experiment, random scenario files of 40 s in length were created with timed events changing every 0.5 s during the video streaming session. These scenarios were applied to three HD-quality videos, while RTP, Legacy UDP, SCTP, RTP over UDP, and RTP over TCP were applied as streaming protocols, resulting in 150 processed videos. Using the MSU-VQM objective analysis tool [30], comparative results were computed against the original videos in terms of PSNR, VQM, and SSIM.
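Of these metrics, PSNR is simple enough to sketch directly. The following is a minimal pure-Python version for equal-length pixel sequences; the actual measurements were made with the MSU-VQM tool, and VQM and SSIM involve more elaborate perceptual models:

```python
import math

def psnr(reference, processed, max_val=255.0):
    """PSNR (dB) between two equal-length pixel sequences; higher is better.

    Computes mean squared error, then 10 * log10(MAX^2 / MSE).
    Returns infinity for identical inputs (zero error).
    """
    mse = sum((r - p) ** 2 for r, p in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a uniform error of 10 gray levels over an 8-bit frame yields a PSNR of roughly 28 dB.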
For these experiments, a network topology was created consisting of one controller, one switch, and two hosts. By using the minievents module script [31], we applied dynamic changes to adjust delay and loss over a period of time during video real-time streaming, in order to measure the impact of changes in network conditions on QoE. The parameters of our event scenarios were selected to check the following three network condition scenarios:
  • Fixed link delay and changing packet loss over time.
  • Switching from a high delay to a substantially lower delay with a fixed packet loss.
  • Gradually decreasing delay over time and then gradually increasing it while applying packet loss.
The selection of these particular scenarios was to evaluate how different transport protocols and user perception respond to abrupt or gradual packet loss or delay. In our experiments, link delay was set in the range of 0–80 ms, and packet loss was set in the range of 0–1%. The flow bandwidth was fixed at 50 Mbps. The values of link delay, loss, and bandwidth were chosen through trial and error to generate sufficiently large quality degradation at the user end, having no other significance in our study. A number of scenario files of 40 s in length with timestamped events were used, where delay and loss were changed during the video streaming session. For each scenario, we streamed the videos with different streaming protocols, recorded all output videos, and ran a comparative analysis.
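The scenario files themselves are simple timestamped event lists. A hypothetical generator in the spirit of the minievents scenarios is sketched below; the field names and structure are illustrative and do not reproduce the actual minievents file format, though the parameter ranges (0–80 ms delay, 0–1% loss) match those used in the experiments:

```python
import random

def make_scenario(duration=40.0, step=10.0, seed=0):
    """Generate a timestamped delay/loss event scenario (sketch).

    One event per `step` seconds over `duration`, with delay drawn from
    0-80 ms and loss from 0-1%, the ranges used in the experiments.
    """
    rng = random.Random(seed)  # seeded for reproducible scenarios
    return [
        {"t": i * step,
         "delay_ms": rng.randint(0, 80),
         "loss_pct": round(rng.uniform(0.0, 1.0), 2)}
        for i in range(int(duration / step))
    ]
```

A 40 s scenario with 10 s steps thus yields four events; the second experiment's 0.5 s steps would yield eighty.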
In order to examine the performance of real-time QoE feedback on SD-WAN operation, an SDN-based rerouting experiment was also set up to demonstrate that QoE feedback can be captured and processed in real time during a live video streaming session. Feedback can be sent on the spot to the SDN controller to alert it of issues in the streaming service. Our intention was to demonstrate how traffic rerouting could be performed instantly on the basis of QoE feedback and how the results would compare to the perception of a participant.
For this scenario, a network of three hosts was created: one acting as a streaming server, the second acting as a client, and the third acting as a non-namespace host that can communicate with the ONOS SDN controller. The hosts were connected through a network of 10 OVS switches with 22 links and 50 flows, where three edge-disjoint paths exist between the client and the server. This network topology and number of flows were chosen to allow the network controller to discover alternate paths without having to solve a full optimization problem (which was not part of this study and will be considered in future work). RTP was used as the default streaming protocol. The network topology is shown in Figure 6.
During video streaming experiences, network degradation scenarios were imposed to affect the quality of the network, in order to indicate if and by how much the participating user would be able to detect such changes during video streaming, and how these changes would affect their QoE. Furthermore, the timing of their QoE feedback was recorded within the time window when QoS parameter change events occurred. Another objective of this experiment was to evaluate how streaming protocols would be able to adapt to these changing events, and if all or some of these changing events would be noticeable to participants or not.

6. Performance Evaluation

6.1. HPC Experiment Analysis

For the HPC experiment, there were 2430 paired comparison questions with 259 participants. A total of 243 participants were successful, and 16 failed the reliability test. Ranks were computed using the Crowd Bradley–Terry model [28]. Figure 7 shows participant ratings for Video 1 for each protocol and scenario of events. We collected similar results for the other videos in Table 1. Using MSU-VQM, we computed VQM for the processed videos from Experiment 1 against the original videos. Table 3 shows the VQM values for each resulting video.
Table 4 shows the highest-ranked protocol in HPC against the best VQM value for each video across the different scenarios. Among all the events and scenarios studied in the HPC experiment, RTP had the highest rating on average. The same conclusion follows from the VQM values, i.e., RTP had the lowest VQM values, followed by TCP; UDP only stood out with better values and rankings in Video 3. For Video 1, TCP had the highest VQM values across all scenarios except scenario 5, with a very small difference compared to RTP; however, in HPC, it was ranked either first or second with a rating very close to RTP's. UDP, on the other hand, had the lowest rating among all scenarios in Video 1. For Video 2, there was a large ranking gap between RTP and the other two protocols in the HPC experiment; however, this gap was not noticeable in the VQM values. In Video 3, UDP was ranked the lowest in three of the five scenarios in the HPC ranks, whereas in VQM it had the highest (worst) values in only two scenarios. For Video 4, the TCP ratings in HPC were consistent with the calculated VQM values.
By comparing subjective and objective results, we found that the HPC and VQM results were consistent: the highly ranked protocols in HPC mostly had the best (lowest) VQM values across the three protocols per video. This demonstrates that objective analysis and human-perceived quality provide consistent results.
We note that the sequence of events in the scenarios in Table 2 was designed to allow us to monitor how protocols differ in recovering from packet loss and delay (research question 3), how the QoE will be affected, and if participating users will detect changes with one parameter fixed and the others changed. It was found in our experiments that, for the packet loss events, the quality degradation was most noticeable; however, changing the delay parameters in some cases was not noticeable by participants, likely due to the buffering ability of the video player on the client side.

6.2. SD-WAN Rerouting Experiment Analysis

The SD-WAN rerouting experiment was conducted in a controlled lab environment in which each participant watched a 3 min live-streamed video and was asked to provide QoE feedback in the form of a ‘dislike’ click whenever the quality was degraded by the timed event scenarios. Participants could provide feedback clicks at any time during the viewing session. Due to the limited number of participants, we assumed that the response from a participant represented a summary of feedback from a group of users, i.e., each user essentially represented a summarization server. As such, a 10 s waiting time was enforced between the acceptance of two consecutive clicks to better represent the summarization, as well as to ensure that the controller’s rerouting was complete, thus avoiding a ping-pong routing problem.
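The 10 s click hold-off described above can be captured in a few lines; the function name and the (accepted, last_accepted) return shape are our illustrative choices:

```python
def accept_click(ts, last_accepted, window=10.0):
    """Accept a click only if `window` seconds have passed since the last one.

    Returns (accepted, new_last_accepted). Models the 10 s hold-off used
    to emulate summarization and avoid ping-pong rerouting.
    """
    if last_accepted is None or ts - last_accepted >= window:
        return True, ts  # accept and remember this click
    return False, last_accepted  # suppress; keep previous accepted time
```

For example, clicks at t = 0 s and t = 10 s are both accepted, while an intermediate click at t = 5 s is suppressed.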
Figure 8 shows the timing of the events in the QoE-aware rerouting experiment for a sample participant, where red dots represent the ping time (right axis), the blue line is the iperf loss percentage (left axis), green bars are VLC player errors, and the gray background indicates active minievents scenarios. The streaming quality improved after every participant feedback click, at which point the blue line indicated no further loss on the streaming link. When only delay was applied, without loss, users did not detect quality degradation and typically did not click the feedback button, primarily due to the playback buffering capabilities of the VLC player. With a gradually increasing packet loss percentage at minimum delay, the QoE was affected immediately, and the feedback for rerouting was received accordingly. When degrading changes were applied on two of the three paths, ONOS kept searching for alternative routes with every click until the quality was acceptable to the user.
For all participants, ONOS managed to reroute the traffic on the basis of user feedback, and the quality of the stream improved after the controller's corrective action. Across all participants, the ONOS reaction and response for the rerouting decision were consistent and timely: it took ONOS 10–15 ms to construct a new route (intent) as a new streaming path and 15–20 ms to reroute the traffic.
These results demonstrate that users’ interactive feedback can be taken into consideration during streaming sessions, and this feedback can be communicated in a timely manner to the SDN controller to alert of an existing issue and that a corrective action is required. It was shown that the rerouting decision can be made on the spot, and quality can be enhanced on the basis of external user feedback.

7. Conclusions and Future Works

In this paper, a real-time QoE crowdsourcing framework was proposed for SD-WAN. The proposed framework is based on a combination of a QoE measurement application and QoS quality parameters, accommodating a variety of streaming protocols. It emphasizes the interdependence between QoE and QoS and how the overall user-perceived QoE can be affected. We analyzed how dynamic changes in network events can affect the performance of different streaming protocols and, accordingly, the perceived quality. We also compared objective and subjective results and found them mostly consistent. It was demonstrated that an SD-WAN controller can receive feedback, detect problems in the service path, and take corrective action by changing the routing paths of the content delivery network in a timely manner. Our goal was to demonstrate that real-time QoE feedback can enhance cloud-based services and adjust service quality on the basis of real-time, active participant interaction.
There are several paths for future work in this area. Firstly, the scalability of the framework can be further confirmed by setting up environments with a larger number of participants in the real world. Furthermore, the use of artificial intelligence (AI) can be considered in this operation for both data summarization and network-based rerouting decisions. With AI, it is possible to learn feedback patterns received by external user participation, and an AI algorithm can decide if rerouting is required or not, where there could be a threshold for the number of clicks. This AI algorithm can determine this threshold dynamically and adjust it by learning from user feedback. AI can also be used to determine the suitable level of data abstraction/summarization for the tradeoff between prompt QoE feedback reaction and the volume of feedback traffic. Another path can focus on algorithms for optimizing SDN environments, cloud resources, and paths, taking into consideration the QoS SLA and minimum QoE requirements. Intent-based programming is currently evolving, and new research could leverage the Intent Framework of the SDN controller to optimize rerouting on the basis of QoE.

Author Contributions

This research work was completed by I.E. as part of her thesis (supervised by S.S.H.) for the degree of Master of Computer Science at the University of Ontario Institute of Technology. I.E. and S.S.H. both contributed to the ideas and methodologies behind the work. Implementation of the test bed and data collection was performed by I.E. The manuscript was written by I.E. and S.S.H. Revisions were implemented by S.S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported through funding from the Natural Sciences and Engineering Research Council of Canada (NSERC) and Ericsson Canada.

Institutional Review Board Statement

Human-based experiments in this work were approved by the Research Ethics Board of the University of Ontario Institute of Technology under application file number 14780.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and/or analyzed during the current study are not publicly available due to the requirements of the project funding sources, but may be available from the corresponding author on reasonable request and subject to the approval of the funding sources.

Acknowledgments

We thank Subjectify.us and the MSU Graphics and Media Lab for helping us to conduct subjective and objective measurements using the MSU Video Quality Measurement Tool.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Roy, R.R. Handbook on Session Initiation Protocol: Networked Multimedia Communications for IP Telephony; CRC Press: Boca Raton, FL, USA, 2018.
  2. Zhao, T.; Liu, Q.; Chen, C.W. QoE in Video Transmission: A User Experience-Driven Strategy. IEEE Commun. Surv. Tutor. 2016, 19, 285–302.
  3. Bankov, D.; Khorov, E.; Lyakhov, A. Fast quality assessment of videos transmitted over lossy networks. In Proceedings of the 2014 International Conference on Engineering and Telecommunication, Moscow, Russia, 26–28 November 2014.
  4. Fernando, K.; Kooij, R.; De Vleeschauwer, D.; Brunnström, K. Techniques for measuring quality of experience. In Proceedings of the International Conference on Wired/Wireless Internet Communications, Luleå, Sweden, 1–3 June 2010; Springer: Berlin/Heidelberg, Germany, 2010.
  5. Chunlei, L. Multimedia over IP: RSVP, RTP, RTCP, RTSP. In Handbook of Emerging Communications Technologies: The Next Decade; CRC Press: Boca Raton, FL, USA, 1997; pp. 29–46.
  6. Wu, C.-C.; Chen, K.-T.; Chang, Y.-C.; Lei, C.-L. Crowdsourcing Multimedia QoE Evaluation: A Trusted Framework. IEEE Trans. Multimed. 2013, 15, 1121–1137.
  7. Kim, H.; Feamster, N. Improving network management with software defined networking. IEEE Commun. Mag. 2013, 51, 114–119.
  8. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  9. Watson, A.B. Toward a perceptual video-quality metric. In Human Vision and Electronic Imaging III; International Society for Optics and Photonics: Bellingham, WA, USA, 1998; Volume 3299, pp. 139–147.
  10. Khorov, E.; Krasilov, A.; Liubogoshchev, M.; Tang, S. SEBRA: SAND-enabled bitrate and resource allocation algorithm for network-assisted video streaming. In Proceedings of the 2017 IEEE 13th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Rome, Italy, 9–11 October 2017.
  11. Liubogoshchev, M.; Korneev, E.; Khorov, E. EVeREst: Bitrate Adaptation for Cloud VR. Electronics 2021, 10, 678.
  12. Chen, K.-T.; Tu, C.-C.; Xiao, W.-C. OneClick: A Framework for Measuring Network Quality of Experience. In Proceedings of the IEEE INFOCOM, Rio de Janeiro, Brazil, 20–25 April 2009; pp. 702–710.
  13. Liotou, E.; Samdanis, K.; Pateromichelakis, E.; Passas, N.; Merakos, L. QoE-SDN APP: A rate-guided QoE-aware SDN-APP for HTTP adaptive video streaming. IEEE J. Sel. Areas Commun. 2018, 36, 598–615.
  14. Sanvito, D.; Moro, D.; Gulli, M.; Filippini, I.; Capone, A.; Campanella, A. ONOS Intent Monitor and Reroute service: Enabling plug & play routing logic. In Proceedings of the 2018 4th IEEE Conference on Network Softwarization and Workshops (NetSoft), Montreal, QC, Canada, 25–29 June 2018.
  15. Berde, P.; Gerola, M.; Hart, J.; Higuchi, Y.; Kobayashi, M.; Koide, T.; Lantz, B.; O’Connor, B.; Radoslavov, P.; Snow, W.; et al. ONOS: Towards an open, distributed SDN OS. In Proceedings of the Third Workshop on Hot Topics in Software Defined Networking, Chicago, IL, USA, 22 August 2014.
  16. ONOS Intent Framework. The ONOS Project. Available online: https://wiki.onosproject.org/display/ONOS/Intent+Framework (accessed on 28 July 2021).
  17. Bu, C.; Wang, X.; Cheng, H.; Huang, M.; Li, K. Routing as a service (RaaS): An open framework for customizing routing services. J. Netw. Comput. Appl. 2018, 125, 130–145.
  18. Hobfeld, T.; Schatz, R.; Varela, M.; Timmerer, C. Challenges of QoE management for cloud applications. IEEE Commun. Mag. 2012, 50, 28–36.
  19. Sezer, S.; Scott-Hayward, S.; Chouhan, P.K.; Fraser, B.; Lake, D.; Finnegan, J.; Vilijoen, N.; Miller, M.; Rao, N. Are we ready for SDN? Implementation challenges for software-defined networks. IEEE Commun. Mag. 2013, 51, 36–43.
  20. Lin, T.-N.; Hsu, Y.-M.; Kao, S.-Y.; Chi, P.-W. OpenE2EQoS: Meter-based method for end-to-end QoS of multimedia services over SDN. In Proceedings of the 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), Valencia, Spain, 4–8 September 2016.
  21. Farshad, A.; Georgopoulos, P.; Broadbent, M.; Mu, M.; Race, N. Leveraging SDN to provide an in-network QoE measurement framework. In Proceedings of the 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Hong Kong, China, 26 April–1 May 2015.
  22. Alberti, C.; Renzi, D.; Timmerer, C.; Mueller, C.; Lederer, S.; Battista, S.; Mattavelli, M. Automated QoE evaluation of dynamic adaptive streaming over HTTP. In Proceedings of the 2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX), Klagenfurt am Wörthersee, Austria, 3–5 July 2013.
  23. Ben Letaifa, A. Real Time ML-Based QoE Adaptive Approach in SDN Context for HTTP Video Services. Wirel. Pers. Commun. 2018, 103, 2633–2656.
  24. Xezonaki, M.E.; Liotou, E.; Passas, N.; Merakos, L. An SDN QoE monitoring framework for VoIP and video applications. In Proceedings of the 2018 IEEE 19th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), Chania, Greece, 12–15 June 2018.
  25. Volpato, F.; Da Silva, M.P.; Gonçalves, A.L.; Dantas, M.A.R. An autonomic QoE-aware management architecture for software-defined networking. In Proceedings of the 2017 IEEE 26th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), Poznan, Poland, 21–23 June 2017.
  26. Hossfeld, T.; Keimel, C.; Hirth, M.; Gardlo, B.; Habigt, J.; Diepold, K.; Tran-Gia, P. Best Practices for QoE Crowdtesting: QoE Assessment with Crowdsourcing. IEEE Trans. Multimed. 2013, 16, 541–558.
  27. Haghighi, A.A.; Shahbazpanahi, S.; Heydari, S.S. QoE-aware optimization in cloud-based content delivery networks. IEEE Access 2018, 6, 32662–32672.
  28. Chen, X.; Bennett, P.N.; Collins-Thompson, K.; Horvitz, E. Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, Rome, Italy, 4–8 February 2013.
  29. Subjectify.us. Crowd-Sourced Subjective Quality Evaluation Platform. Moscow State University. Available online: http://www.subjectify.us (accessed on 28 July 2021).
  30. Vatolin, D.; Moskvin, A.; Petrov, O.; Trunichkin, N. MSU Video Quality Measurement Tool. Available online: https://bit.ly/2TDR5tU (accessed on 28 July 2021).
  31. Giraldo, C. Minievents: A Mininet Framework to Define Events in Mininet Networks. 2015. Available online: https://github.com/mininet/mininet/wiki/Minievents:-A-mininet-Framework-to-define-events-in-mininet-networks (accessed on 28 July 2021).
Figure 1. In-Network QoE Measurement Framework [21].
Figure 2. The flow of a complete OneClick assessment procedure [12].
Figure 3. Real-time QoE content-based network model.
Figure 4. Real-Time QoE crowdsourcing feedback based on SD-WAN environment.
Figure 5. QoE crowdsourcing pair video comparison model design.
Figure 6. SD-WAN testbed network topology.
Figure 7. HPC QoE Rating results for Video 1 using Crowd Bradley–Terry model.
Figure 8. QoE-based rerouting decisions based on participant feedback.
Table 1. HPC video specifications.
| Video | Content | Frame Width | Frame Height | Bit Rate | Frame Rate |
|---|---|---|---|---|---|
| Video 1 | Beach waves | 1920 | 1080 | 17,388 kbps | 50 fps |
| Video 2 | Ski views | 1920 | 1080 | 5141 kbps | 30 fps |
| Video 3 | Animation | 1920 | 1080 | 1249 kbps | 60 fps |
| Video 4 | Wild animals | 640 | 360 | 1554 kbps | 30 fps |
Table 2. HPC experiment scenarios.
Table 2. HPC experiment scenarios.
Time0 s10 s20 s30 s35 s
Scenario 1
Delay10 ms10 ms50 ms0 ms25 ms
Loss1%0%0%1%1%
Scenario 2
Delay70 ms10 ms5 ms5 ms25 ms
Loss0%1%1%0%0%
Scenario 3
Delay75 ms0 ms10 ms35 ms5 ms
Loss1%0%1%0%1%
Scenario 4
Delay35 ms80 ms5 ms25 ms5 ms
Loss0%0%1%1%0%
Scenario 5
Delay35 ms45 ms45 ms25 ms65 ms
Loss1%0%1%0%0%
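Each scenario is a timed schedule of delay and loss changes applied during playback. As an illustrative sketch only, the Scenario 1 schedule above could be replayed on a Linux emulation host with tc/netem, much as a Minievents-style event script [31] would drive it inside Mininet; the interface name `eth0` is an assumed placeholder, and the commands are printed rather than executed.

```python
# Illustrative only: the Scenario 1 impairment schedule from Table 2,
# expressed as Linux tc/netem commands. IFACE is an assumed placeholder;
# commands are printed, not executed (applying them requires root).
IFACE = "eth0"

# (time offset in s, delay in ms, loss in %), from Table 2, Scenario 1
SCENARIO_1 = [(0, 10, 1), (10, 10, 0), (20, 50, 0), (30, 0, 1), (35, 25, 1)]

def netem_cmd(t, delay_ms, loss_pct, iface=IFACE):
    """Build the netem qdisc command for one scheduled event."""
    verb = "add" if t == 0 else "change"  # the first event installs the qdisc
    return (f"tc qdisc {verb} dev {iface} root netem "
            f"delay {delay_ms}ms loss {loss_pct}%")

for event in SCENARIO_1:
    print(netem_cmd(*event))
```

In an actual run, each command would be executed at its time offset on the bottleneck link of the testbed topology.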
Table 3. HPC QoE rating results by transmission protocol.
| Video | Protocol | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 |
|---|---|---|---|---|---|---|
| Video 1 | UDP | 3.45 | 3.15 | 5.29 | 3.18 | 3.86 |
| Video 1 | RTP | 3.51 | 3.15 | 3.45 | 3.15 | 4.32 |
| Video 1 | TCP | 7.28 | 8.52 | 5.35 | 3.68 | 4.28 |
| Video 2 | UDP | 4.31 | 4.15 | 4.39 | 4.10 | 3.94 |
| Video 2 | RTP | 4.33 | 4.08 | 4.36 | 4.18 | 4.37 |
| Video 2 | TCP | 4.10 | 4.10 | 4.11 | 4.09 | 4.30 |
| Video 3 | UDP | 2.15 | 1.82 | 2.46 | 2.39 | 2.37 |
| Video 3 | RTP | 2.71 | 1.63 | 3.05 | 1.84 | 2.74 |
| Video 3 | TCP | 2.49 | 2.33 | 2.20 | 2.26 | 2.47 |
| Video 4 | UDP | 2.53 | 2.42 | 2.45 | 2.49 | 2.64 |
| Video 4 | RTP | 2.55 | 2.09 | 2.90 | 2.96 | 2.57 |
| Video 4 | TCP | 2.01 | 2.60 | 2.09 | 2.18 | 2.61 |
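The per-protocol scores of this kind are aggregated from pairwise crowd preferences (Figure 7 uses the Crowd Bradley–Terry model of [28]). As a minimal sketch of the underlying idea, the plain Bradley–Terry fit below (Hunter's MM updates, without Crowd-BT's annotator-reliability weighting) turns pairwise win counts into scores; the win counts are hypothetical, not the study's data.

```python
# Sketch: plain Bradley-Terry fit over pairwise preference counts.
# This omits the annotator-reliability term of the Crowd Bradley-Terry
# model [28]; the win counts below are hypothetical illustration data.
def bradley_terry(wins, items, iters=200):
    """wins[(a, b)] = times a was preferred over b; returns a score per item."""
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            w_i = sum(wins.get((i, j), 0) for j in items if j != i)  # total wins of i
            denom = 0.0
            for j in items:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)  # comparisons of i vs j
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            new[i] = w_i / denom if denom else p[i]  # MM update
        total = sum(new.values())
        p = {i: v * len(items) / total for i, v in new.items()}  # normalize scale
    return p

# Hypothetical judgments for one video: ten comparisons per protocol pair.
wins = {("TCP", "UDP"): 8, ("UDP", "TCP"): 2,
        ("TCP", "RTP"): 6, ("RTP", "TCP"): 4,
        ("RTP", "UDP"): 7, ("UDP", "RTP"): 3}
scores = bradley_terry(wins, ["UDP", "RTP", "TCP"])
```

With these counts the fit ranks TCP above RTP above UDP, mirroring how the crowdsourced pairwise feedback is condensed into the per-protocol ratings reported above.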
Table 4. HPC/VQM results comparison by transmission protocols.
| Scenario | Video 1 VQM | Video 1 HPC | Video 2 VQM | Video 2 HPC | Video 3 VQM | Video 3 HPC | Video 4 VQM | Video 4 HPC |
|---|---|---|---|---|---|---|---|---|
| 1 | UDP/TCP | RTP | TCP | RTP | UDP | TCP | TCP | UDP/TCP |
| 2 | RTP | RTP | TCP/RTP | TCP | RTP | RTP | RTP | RTP |
| 3 | RTP | TCP | TCP | RTP | RTP | UDP | TCP | RTP |
| 4 | RTP | TCP/RTP | RTP | RTP | RTP | TCP/RTP | TCP | TCP |
| 5 | UDP | TCP/RTP | UDP | UDP/TCP | UDP | UDP | RTP | RTP |
Share and Cite

Ellawindy, I.; Shah Heydari, S. Crowdsourcing Framework for QoE-Aware SD-WAN. Future Internet 2021, 13, 209. https://0-doi-org.brum.beds.ac.uk/10.3390/fi13080209

