Article

A Heterogeneous Distributed Virtual Geographic Environment—Potential Application in Spatiotemporal Behavior Experiments

1 State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100012, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Zhejiang-CAS Application Center for Geoinformatics, Jiaxing 314199, China
4 School of Life Sciences, Arizona State University, Tempe, AZ 85287, USA
5 School of Geology and Geomatics, Tianjin Chengjian University, Tianjin 300384, China
* Authors to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2018, 7(2), 54; https://0-doi-org.brum.beds.ac.uk/10.3390/ijgi7020054
Submission received: 11 December 2017 / Revised: 1 February 2018 / Accepted: 5 February 2018 / Published: 7 February 2018

Abstract

Due to their strong immersion and real-time interactivity, helmet-mounted virtual reality (VR) devices are becoming increasingly popular. Based on these devices, an immersive virtual geographic environment (VGE) provides a promising method for research into crowd behavior in emergencies. However, even the cheaper helmet-mounted VR devices are not yet widespread, and they will continue to coexist with personal computer (PC)-based systems for a long time. A heterogeneous distributed virtual geographic environment (HDVGE) is therefore a feasible solution to the heterogeneity problems caused by the various types of clients, and can support spatiotemporal crowd behavior experiments with large numbers of concurrent participants. In this study, we developed an HDVGE framework, and put forward a set of design principles that define the similarities between the real world and the VGE. We discussed the HDVGE architecture, and proposed an abstract interaction layer, a protocol-based interaction algorithm, and an adjusted dead reckoning algorithm to solve the heterogeneous distributed problems. We then implemented an HDVGE prototype system focusing on subway fire evacuation experiments. Two types of clients are considered in the system: PC and all-in-one VR. Finally, we evaluated the performance of the prototype system and its key algorithms. The results show that in a low-latency local area network (LAN) environment, the prototype system can smoothly support 90 concurrent users consisting of PC and all-in-one VR clients. HDVGE provides a feasible solution for studying not only spatiotemporal crowd behaviors under normal conditions, but also evacuation behaviors under emergency conditions such as fires and earthquakes. HDVGE could also serve as a new means of obtaining observational data about individual and group behavior in support of human geography research.

1. Introduction

1.1. Background

Geovisualization started with two-dimensional (2D) mapping, and later developed to include three-dimensional (3D) interactive rendering. Starting in the mid-1990s, with the advent of the virtual reality modeling language [1], applications of the virtual reality geographic information system (VRGIS) developed rapidly in various fields [2]. VRGIS offers an immersive GIS environment, and can be considered an advanced form of geovisualization [2]. Because VRGIS had to process massive amounts of GIS data and render VR graphics at the same time, VRGIS applications at that time could only run on high-end workstations, and sometimes even required supercomputers. A virtual geographic environment (VGE) is a VRGIS-based platform for multidimensional visualization, dynamic process simulation, and geocollaboration [3,4]. The second wave of VR has brought better graphics hardware and cheaper helmet-mounted displays (HMDs), greatly improving the availability and affordability of virtual reality (VR) technology. This will also promote an upgrade of VRGIS, and facilitate the development of new theories and methods of geovisualization, geoanalysis, and geocollaboration.
VR-based virtual experiments have been widely used in spatiotemporal behavioral research. In the field of cognitive behavior research, virtual behavioral experiments have been used to study spatial cognition, path selection, obstacle avoidance, and other factors [5,6,7,8,9]. In the game industry, in order to improve game design and enhance the gaming experience, data mining and visualization have been used to analyze users’ spatiotemporal behaviors in massively multiplayer online role-playing games (MMORPGs) [10,11,12]. In education and training, virtual reality simulators are used to train users’ skills and enhance learning effectiveness relating to flight [13], driving [14], fire escape [15], and earthquake evacuation [16]. In the field of urban design and planning, the impact of the urban environment upon pedestrian decision-making behavior [17], pre-occupancy assessment [18], and guidance layout [19] also need the support of observational data about users’ spatiotemporal behaviors, which could be easily acquired in the virtual environment.
In recent years, crowd behavior research methods have mainly fallen into four groups. First, computer vision technology has been used to extract pedestrian motion trajectories from surveillance video and other multimedia data. With these trajectories, researchers can identify and analyze pedestrian behavioral patterns and the characteristics of spatiotemporal motion [20,21]. Second, classic social force models [22,23] and field models [24] have been used to model, simulate, and analyze pedestrian behavior in specific situations [25]. Third, controlled real-world crowd experiments [26] have been designed with particular research goals in mind, in order to obtain observations that faithfully represent pedestrian movement patterns in the real world. Fourth, virtual reality experiments provide a new methodology for crowd behavior research. The data collected in a virtual environment can be used not only to validate and calibrate existing models [27], but also for further data mining.
Real-world evacuation experiments are difficult to implement for two reasons. First, it is difficult to resolve the safety issues in evacuation experiments, which would involve the participation of real people. Second, it is difficult to capture and quantitatively describe the spatiotemporal behavior observed in emergency scenarios. Virtual geographical experiments (VGExs) are becoming a promising research method. Immersed in a helmet-mounted VGE, the user not only has a strong sense of presence and immersion, but can flexibly control an avatar to evacuate in an emergency scene. This new form of experiment not only avoids the aforementioned safety problems, but can also faithfully capture the behavioral characteristics of real people. This would greatly facilitate behavior analysis and rule discovery. However, most of the current emergency drill systems have been designed only for single users [15,28]. There are few multi-user collaborative experimental platforms to support the study of crowd behavior.

1.2. Related Works

Helmet-mounted VR devices can provide participants with a strong sense of immersion and real-time interactivity, which can enhance the effectiveness of existing research. Moussaïd et al. [29] constructed a desktop personal computer (PC)-based multi-person collaborative virtual environment. They carried out crowd movement experiments in both real and virtual scenes, and demonstrated the feasibility of using a shared 3D virtual environment to carry out crowd experiments involving real people. A cave automatic virtual environment (CAVE) is an immersive virtual reality environment in which projectors are directed at between three and six of the walls of a room-sized cube [30]. Chen [31] used a large CAVE-like system to design an immersive, multi-user, and multi-sensor virtual environment. The system used infrared devices to track user movements, and shutter glasses to provide immersion and access to head activity data. However, desktop PC-based VR [29] is less immersive than state-of-the-art HMD VR, and large CAVE-like systems [31] are expensive.
VR experiments have been widely used in crowd evacuations and fire drills. These use cases can be divided into three categories according to their purposes: (1) the use of VR as a general-purpose tool for emergency research and analysis [32]; (2) the use of VR to study human behaviors under fire situations such as route choice [33] and decision-making [34]; and (3) the use of VR to analyze environmental factors in fire evacuations, such as the location, form, quantity, and layout of evacuation aids [35]. If VR experiments can be conducted in multi-user collaborative environments, the virtual world will be greatly enhanced with realistic social interactions, and therefore may improve the validity of experiments.
Currently, there are many multiplayer online communities and games, such as Second Life and World of Warcraft. However, they are not suitable for user behavior research. On the one hand, we cannot obtain user behavior data from them. On the other hand, they are not designed for a specific experimental purpose.
In reality, it will be a long time before helmet-mounted VR devices are widely adopted. This means that PC-based systems and helmet-mounted VR will continue to coexist. Therefore, to carry out large-scale virtual experiments for spatiotemporal crowd behavior research, we need to solve the compatibility problems arising from heterogeneous clients. The heterogeneity issues mainly come from the following three aspects. (1) Computational capacity varies greatly across clients; for example, PCs usually deliver much higher performance than all-in-one VR devices. How, then, can different configurations of heterogeneous computing platforms be made compatible with each other? (2) Participants engage in experiments through heterogeneous interactions. How can the consistency of the user experience across different clients be ensured? (3) A local area network (LAN) alone is not an ideal environment for organizing large numbers of users to participate in spatiotemporal behavior experiments; in this case, we must use the Internet. How, then, can a heterogeneous network environment be supported?
In this paper, we propose several optimization solutions to address the heterogeneous problems identified in the previous paragraph. We use these optimization strategies to ensure that our system runs smoothly, even on low-configuration computing platforms. We propose an abstract interaction layer to adapt to the interactions of heterogeneous devices, and use protocol-based interaction algorithms to ensure the consistency of user experience across different clients. We use a distributed architecture design to solve the problem of the non-synchronization of user states caused by network delays, so that the system can be deployed in heterogeneous networks. Fire drills are used as a case study to demonstrate the applicability of the presented framework. This design allows us to conduct virtual experiments using both the LAN and the Internet, which could facilitate participation from different geographic locations. The specific algorithms will be described in Section 3.2.
Geography is a scientific discipline that studies the Earth, its inhabitants, and their interactions at multiple spatial scales, from the whole planet to urban space. The proposed framework aims to support human spatial behavior research at urban scales both in outdoor and indoor environments. The subway scenario, which is intended specifically to demonstrate the functionality and basic features of the proposed framework, is typical of indoor urban space and considered a type of micro-scale geographic environment.
In Section 2, we introduce the conceptual framework and design principles of a heterogeneous distributed virtual geographic environment (HDVGE). Then, we design the overall architecture and key technologies of the experimental platform in Section 3. In Section 4, we implement a prototype system for crowd evacuation in a subway fire scene. Through performance analysis, we demonstrate the capability of this prototype system to support heterogeneous distributed virtual experiments for spatiotemporal crowd behavior experiments. The proposed system can serve as a data collection tool for further behavior analysis. In Section 5, we discuss the key issues in the experiments. In the end, we summarize the conclusions, and highlight future work.

2. HDVGE Conceptual Framework

2.1. Conceptual Framework

The human–environment relationship refers to the relationship between the existence, development, and activities of human society on the one hand, and the geographical environment on the other [36]. The geographical environment here is considered to be the entire geographic environment, encompassing natural and human elements, which are intertwined in accordance with certain rules and are closely integrated. The literature [37] suggests that VGEx can be categorized into virtual natural geographic experiments and virtual human geographic experiments. Virtual natural and human elements constitute the “environment”, while individuals, groups, and society in the VGE constitute the “human” in a virtual experiment. Due to network and other technological limitations, a collaborative VGEx, even one based on distributed technology, cannot normally support the number of concurrent users of a real human society. However, virtual crowd experiments at the individual or group level are affordable. According to the human–environment relationship theory [37], we categorize the VGEx into three types of elements: human, entity, and environment. These elements and their relationships are shown in Figure 1.
  • Virtual Human: HDVGE regards the human as the core, and emphasizes the subjectivity of human beings. The virtual humans in the HDVGE include avatars controlled by real users, and agents driven by computer programs. They can interact with other elements as well as with each other, for example through interactions between avatars, or between avatars and agents. Multiple virtual humans form a group through social relations, roles, and tasks. Massive groups in collective activities form a crowd.
  • Environment: The environment is the background of the VGEx, and is the static part of the VGE. We mainly use 3D models to simulate the real geographical environment, including terrain, vegetation, architecture, etc., which constitute the physical part of the VGE. Similar to the real environment, it can be perceived and recognized. At the same time, different environments will constrain a virtual human’s behaviors.
  • Entity: Entities are the dynamic, variable objects in the VGE. They can interact with humans: they can not only be perceived by humans, but can also provide feedback to them. For example, small obstacles in the evacuation process, such as desks and chairs, can affect the route choice of virtual humans; meanwhile, their state variables can also be changed by the humans. An essential issue in HDVGE is how the consistency of entity state variables across heterogeneous clients can be maintained.
In addition, HDVGE includes an important non-physical element: the event. An event can not only express natural and human geographic processes, it can also express their interactions, such as the transfer of crowds in the course of mountain floods, and the evacuation of crowds during subway fires. Events are represented by the updating of attributes of entities and environments. They can be perceived by humans, and they can also drive people to respond. From the experiment organizer’s point of view, virtual instruments are their tools for observing and documenting the spatiotemporal states of various elements in virtual experiments.

2.2. HDVGE Design Principles

In the literature [38], the similarities in human behavior between the virtual and real worlds have been studied, and a mapping principle has been proposed. As the research purposes and objects differ from one virtual world to another, a research frame based on four aspects has been put forward: group size, traditional controls and independent variables, contextual and social architecture factors, and directionality. Similarly, the virtual–real geographic similarity principles were also established from four aspects [37]: geographic space–time, geographic attribute, geographic attributes group, and geographic spatial cognition. The higher the similarity between the virtual and real, the closer the user behavior in the virtual experiment is to the real world. From the above principles, we propose the following HDVGE design principles.
  • Similarity in geographic space–time: The time and space in VGE should be similar to that in the real geographic environment (RGE). That is, the spatial scale and time scale in VGE should be identical to the real ones. This similarity provides principles for modeling the 3D virtual environment and process simulation. It requires strict reference to the size and proportion of real space when modeling VGE. Additionally, the time scale of the VGE cannot be changed.
  • Similarity in spatial attributes: The spatial attributes and distributions of entities and processes in the VGE should be similar to that in the RGE. VGEx includes the simulation of processes in natural geography and human geography. This principle stipulates that the modeling and presentation of objects and geographic processes should be similar to reality.
  • Similarity in group composition: Observations of pedestrians in public places [39] show that at least 70% of pedestrians in a given population are not traveling alone, but walk in groups. In a VGEx for spatiotemporal behavior research, group composition and member attributes must be similar to reality. Due to the limitations of 3D modeling, a VGEx cannot provide every user with an elaborate avatar. We use easily discernible avatar models to represent the members of a group, while those outside the group use an avatar with a different appearance. This similarity provides the basis for group modeling, observation, recording, and analysis.
  • Similarity in perception: The subject of a VGEx perceives the environment, entities, other subjects’ spatiotemporal positions, attributes, and group relations from a first-person perspective. The results of this perception process should be similar to the perception results of a RGE. For example, during a fire, the subjects’ perceptions of the evacuating crowd, and their own companions in the VGEx, should be similar to those in an actual fire environment. This similarity could stimulate the subject to behave similarly to reality. Therefore, this similarity provides the framework and principles for virtual scene design, process simulation, and interactions between multiple subjects.
In addition, HDVGE designs are also constrained by factors such as VR device performance, usability, and user experience [40]. The design of an HDVGE should consider these aspects in an integrated manner. The ultimate design would reflect a compromise between the various factors mentioned above.

3. HDVGE Architecture and Key Technologies

3.1. HDVGE Architecture

According to the framework and design principles described in Section 2, which consider user experience, system performance, and research objectives, we propose a detailed architecture for an HDVGE, as shown in Figure 2.
• Collaborative Server
A collaborative server is primarily responsible for providing network services to the HDVGE. A VGEx for spatiotemporal behavior research often requires multiple participants distributed in different geographic locations. With this in mind, we adopted an authoritative server-based architecture instead of a peer-to-peer architecture. The server supports deployment in heterogeneous network environments, in order to meet experimental needs in either a LAN or a wide area network (WAN). The authoritative server maintains the states of all of the objects in the virtual scene, and is responsible for computing and updating them. Each operation on a client requires sending a synchronization request to the server. The server periodically performs object status verification, computation, and updating, and then sends the latest statuses back to the target clients. Because the performance of heterogeneous clients varies enormously, this architecture shifts computational stress from the client side to the server side, enabling clients to focus on the high-fidelity rendering of the HDVGE. In addition, the architecture is easy to scale, given the possibility of large numbers of concurrent users.
The collaborative server can provide a variety of services. The transmission control protocol (TCP) service is mainly used for frequently updated user data, such as the location, orientation, and actions of the avatar. The hypertext transfer protocol (HTTP) service is mainly used for transactional network requests, such as user authentication and background management. Voice communication between users in a group is achieved through voice services. The database is mainly used to store structured data during the experiment, while the storage cluster is mainly used to store unstructured data and data that need to be serialized. The logging module periodically records the status and behavior of all of the users in the experiment. Given the high real-time and interactivity requirements, when there are massive numbers of concurrent users, the load-balancing module is responsible for scheduling computing and storage resources, and for distributing tasks according to the actual load.
• Heterogeneous Clients
Although all-in-one VR offers a better sense of immersion, its availability and performance remain limited. As large numbers of participants are required in spatiotemporal crowd behavior experiments, the system has been designed to be able to interact with heterogeneous clients so that PC users can be included. All-in-one VR uses high-precision sensors and gamepads as input devices, and HMD as the output device, while a PC uses the traditional mouse and keyboard as input devices, and the monitor as the output device. To accommodate interactions between heterogeneous clients, we designed a device-oriented graphic user interface (DGUI). On a PC, it appears as a screen-space user interface, while an all-in-one VR has a view-centered user interface that follows the user’s head movement. The DGUI is mainly used to display user-related information. The interaction algorithm for heterogeneous clients will be described in detail in Section 3.2.
As an HDVGE is a common workspace shared by multiple users, the 3D modeling of the environment must be sufficiently photorealistic. In light of the limited computing and rendering capabilities of the heterogeneous clients, 3D models and scene rendering must also be optimized to meet the requirements of user experience and system availability. A heterogeneous distributed virtual environment is a trade-off between high fidelity and availability. As an important reference point for users to perceive the virtual environment, avatars’ skin, bones, and animations also need to meet the above requirements. We distinguish between different groups of avatars using easily identifiable colors of clothing. The avatars’ skeletal animations meet the non-verbal communication needs between users in the HDVGE.
We add a logging module to the heterogeneous client, which is responsible for recording the local user’s locations, orientations, actions, and other status information. This module differs from the server-side logging module, which records the status data of all of the users. Although the log data are somewhat redundant, this redundancy increases the reliability of data logging.
The network synchronization module is mainly used to communicate with the server. Data transmission is based on the TCP protocol. Heterogeneous distributed clients perform collaborative tasks through the network, whose real-time performance and interactivity are important factors affecting the user experience. The HDVGE architecture uses the ideas of an authoritative server and dumb clients. All of the clients send their own status changes to the server, and the collaborative server then forwards them to each client as requested. This architecture reduces the clients’ computing pressure and prevents cheating. The disadvantage, however, is that because all of the data sent and received must first pass through the collaborative server, the overall performance is greatly affected by the speed of the network. Therefore, the client prediction algorithm is very important. The algorithm will be described in detail in Section 3.2.

3.2. Key Technologies

3.2.1. Abstract Interaction Layer for Heterogeneous Clients

Interactive devices and methods vary greatly between HDVGE clients. The interactive devices of PC clients include the keyboard, mouse, and monitor. PC users use the mouse to control the viewpoint rotation, and use the keyboard to control the viewpoint movement, trigger skeletal animations, and perform other actions. The interactive devices of all-in-one VR clients mainly include the HMD, gamepad, and stereo screen. The HMD is equipped with a high-precision nine-axis sensor, which combines three sensors: a three-axis accelerometer, a three-axis gyroscope, and a three-axis electronic compass. The HMD mainly uses the three-axis gyroscope to measure the attitude parameters of the helmet, and then reconstructs the user’s 3D head motion. That is, the user controls the rotation of the viewpoint with the HMD, controls the viewpoint movement with the gamepad, and triggers the skeletal animations with its buttons. Although the devices and methods vary considerably, the ultimate goals are the same. Therefore, in order to be compatible with the different interactions of heterogeneous clients, we designed an abstract interaction layer (AIL), so that the different interactive methods can achieve the same results. Figure 3 shows a typical interaction process. The left and right sides of the figure represent the interaction processes of the all-in-one VR and the PC client, respectively.
The AIL is a collection of predefined actions. It is responsible for converting interactions from heterogeneous clients into standard actions. First, we define the standard actions that can be recognized by the system. Second, we establish a mapping relationship between the various types of operations from heterogeneous clients and AIL standard actions. This mapping relationship bridges the differences between heterogeneous clients, and guarantees that different operations from heterogeneous clients can produce the same effect. Third, according to the specific interaction action, each client computes and updates the rendering scene in its own heterogeneous computing platform. Finally, the rendering results are sent to the client’s display device.
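As an illustration of this mapping, the following minimal Python sketch shows how raw inputs from a PC and an all-in-one VR client could be converted into the same standard actions. The action names, event identifiers, and mapping tables are hypothetical placeholders for the predefined actions described above, not the prototype's actual implementation.
```python
from enum import Enum, auto
from typing import Optional

class StandardAction(Enum):
    """Standard actions recognized by the system (illustrative names only)."""
    MOVE_FORWARD = auto()
    ROTATE_VIEW = auto()
    TRIGGER_ANIMATION = auto()

# Mapping tables that bridge device-specific operations and AIL standard actions.
PC_MAPPING = {
    "key_w": StandardAction.MOVE_FORWARD,        # keyboard controls movement
    "mouse_move": StandardAction.ROTATE_VIEW,    # mouse controls the viewpoint
    "key_g": StandardAction.TRIGGER_ANIMATION,   # e.g., trigger the "greet" animation
}
VR_MAPPING = {
    "gamepad_stick_up": StandardAction.MOVE_FORWARD,   # gamepad controls movement
    "hmd_rotation": StandardAction.ROTATE_VIEW,        # HMD sensors control the viewpoint
    "gamepad_button_a": StandardAction.TRIGGER_ANIMATION,
}

def to_standard_action(client_type: str, raw_event: str) -> Optional[StandardAction]:
    """Convert a device-specific event into an AIL standard action."""
    mapping = PC_MAPPING if client_type == "PC" else VR_MAPPING
    return mapping.get(raw_event)

# Different raw inputs from heterogeneous clients yield the same standard action.
assert to_standard_action("PC", "key_w") == to_standard_action("VR", "gamepad_stick_up")
```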

3.2.2. Protocol-Based Interactions between Heterogeneous Clients

The standard actions defined in the AIL also provide a common language for interactions between the heterogeneous clients. For heterogeneous clients to communicate and interact with each other, we propose protocol-based interactions to implement the collaboration between them. The protocol is a data structure that the system agrees upon in advance for data exchange between the clients. It can be understood and applied by each client in its own form, thereby masking the differences between clients.
A typical data transmission process based on a custom protocol is shown in Figure 4. First, a user of an all-in-one VR changes his status locally. Then, based on the type of heterogeneous client, this interaction is mapped to standard actions by the AIL. Next, the system uses the custom protocol to encode the standard action. To improve network transmission efficiency, the encoded result is converted to binary form before being transmitted to the server. After receiving the status update from client 1, the server calculates and processes the status information, and then forwards it to other clients in the current scene.
Take PC client 2 as an example. After receiving the binary data, client 2 first converts it to text data and decodes it. It then restores the interaction of client 1 according to the custom protocol, and updates the local copy of client 1’s status. Finally, based on the latest status of client 1, client 2 performs the computation, rendering, and output display, and responds according to its own needs and feedback. All other client-side interactions follow a similar process.
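The following minimal sketch illustrates one possible form of such a protocol-based exchange. The actual wire format of the prototype is not reproduced here; a JSON text encoding converted to UTF-8 bytes, with hypothetical field names, is assumed purely for illustration.
```python
import json

def encode_status(user_id: int, action: str, position: tuple) -> bytes:
    """Encode a standard action and avatar status according to the agreed protocol,
    then convert the text to binary form for network transmission."""
    message = {
        "uid": user_id,
        "action": action,        # an AIL standard action name, e.g. "MOVE_FORWARD"
        "pos": list(position),   # (x, y, z) position in scene coordinates
    }
    return json.dumps(message).encode("utf-8")

def decode_status(payload: bytes) -> dict:
    """Restore a remote client's interaction from the received binary payload."""
    return json.loads(payload.decode("utf-8"))

# Client 1 (all-in-one VR) encodes its new status; the server forwards the packet;
# client 2 (PC) decodes it and updates its local copy of client 1.
packet = encode_status(user_id=1, action="MOVE_FORWARD", position=(10.0, 0.0, 10.0))
remote_status = decode_status(packet)
```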

3.2.3. Adjusted Dead Reckoning Algorithm for Client Prediction

The network environment in which the distributed clients are located largely determines the real-time experience of the system. Network latency is a key issue affecting the overall performance of the system. In an HDVGE, not only does the status of the virtual avatars in each client need to be synchronized, but many other scene elements also require consistent maintenance through the server, such as interactions between avatars and entities, user entry and exit events, and the instantiation and deletion of networked entities. Virtual scene maintenance ensures the consistency of the scene, entities, avatars, and other elements across clients, avoiding differences in user perception caused by the distributed clients.
A typical client state synchronization process of an HDVGE is shown in Figure 5. We assume that the network latency for each client is consistent throughout the process. The initial position of client 1 is (10, 10), and it moves one unit along the x-axis. Client 1 sends a new status to the server while moving the local avatar. The data reaches the server after t1 time. Then, the server receives the new status sent by client 1, and starts to broadcast to other clients. The broadcasted data reaches client 1 after time t2, and reaches client 2 after time t3. At this point, client 2 can see the new status of client 1. In the process, the network delay of client 1 is (t1 + t2), while the new status of client 1 reaches client 2 after (t1 + t3). That is, the status of client 1 as seen by client 2 is actually the former’s status before time (t1 + t3).
An out-of-sync status between distributed clients can cause serious problems. Assume that in the HDVGE for a crowd evacuation experiment, users need to control the avatars to escape quickly from the scene of the fire. Client 1 and client 2 are in one group, and need to escape together. Client 2 sees the status of client 1 before time (t1 + t3), which is slightly behind client 2. Thus, client 2 stops and waits for client 1. However, in fact, client 1 has already come to the front. At this point, client 1 sees client 2 falling behind. Therefore, client 1, in turn, will need to stop and wait for client 2. It can be seen that the out-of-sync status between distributed clients will eventually make client 1 and client 2 stop moving. We need to predict the state of the next moment based on the client’s current state, so that the avatars of different clients appear to be synchronized. We also need to minimize deviations between the true and the predicted values.
There are many algorithms that predict a moving object’s future states based on the latest state. The most widely used are dead reckoning (DR) and the Kalman filter (KF). DR is an algorithm to predict the motion parameters of a moving object. It predicts the state of an object based on the latest position, velocity, and acceleration, and is widely used in the fields of aviation and navigation [41]. Curtiss Murphy [42] uses projective velocity blending, which mixes the newly acquired velocity with the current velocity, and predicts the position combined with the time variable. The KF is an algorithm that estimates the state of a system from measured data. It is commonly used in guidance, navigation, and control systems. In computer vision applications, the KF is used for object tracking to predict an object’s future location, account for noise in an object’s detected location, and help associate multiple objects with their corresponding tracks [43]. Comparing the two, the DR algorithm is more widely used, and has been applied in networked games [44,45]. Therefore, we adjusted the DR algorithm to provide client-side predictions, and compare its performance with KF.
The algorithm proposed by Curtiss Murphy [42] uses a fixed update rate to implement the DR algorithm. In HDVGE, the client uploads the data to the server only when its status data changes. Then, the server pushes the data to other subscribed clients. Thus, the client status data are not sent and received at regular intervals. Due to the asynchronous updating scheme, we cannot use the original algorithm to calculate the velocity-blending factor needed for predicting the future position. Therefore, we developed a modified form of the algorithm. The formulas of the modified algorithm are shown in Formulas (1)—(4).
$$V_b = V_0 + (V_0' - V_0)\,T_w \qquad (1)$$
$$P_t = P_0 + V_b T_t + \tfrac{1}{2} A_0' T_t^2 \qquad (2)$$
$$P_t' = P_0' + V_0' T_t + \tfrac{1}{2} A_0' T_t^2 \qquad (3)$$
$$Q_t = P_t + (P_t' - P_t)\,T_w \qquad (4)$$
The velocity-blending factor $T_w$ is a normalized value that is determined according to the client data update time. Formula (1) calculates the blended velocity $V_b$ using the velocity-blending factor, where $V_0$ represents the current velocity and $V_0'$ represents the last known velocity. Formula (2) projects the future position $P_t$ after $T_t$ from the current position $P_0$, the blended velocity $V_b$, and the last known acceleration $A_0'$, where $T_t$ represents the time elapsed since the last data update. Formula (3) projects the future position $P_t'$ after $T_t$ based on the last known position $P_0'$, last known velocity $V_0'$, and last known acceleration $A_0'$. Formula (4) blends the results of Formulas (2) and (3) to obtain the final projected position $Q_t$.
As seen from the formulas, the predicted position is a linear combination of the projection from the current position and the projection from the last known position. During the operation of the system, each time the data are updated, the new data are used to correct the current data, reducing errors and improving accuracy. We need to adjust the velocity-blending factor based on the actual data update interval of the system. The larger the $T_w$ value, the greater the weight given to the projection based on the last known (most recently received) position; the smaller the $T_w$ value, the greater the weight given to the projection based on the current position.
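The following is a minimal Python sketch of Formulas (1)–(4), assuming three-component position, velocity, and acceleration tuples; it illustrates the calculation rather than reproducing the prototype's Unity3D implementation.
```python
def adr_predict(p0, v0, p0_last, v0_last, a0_last, t_elapsed, t_w):
    """Adjusted dead reckoning prediction following Formulas (1)-(4).

    p0, v0                    -- current position and velocity of the local copy
    p0_last, v0_last, a0_last -- last known position, velocity, and acceleration
                                 received from the server
    t_elapsed                 -- time elapsed since the last data update (Tt)
    t_w                       -- normalized velocity-blending factor (Tw)
    All vectors are (x, y, z) tuples.
    """
    # (1) blended velocity
    v_b = tuple(v + (vl - v) * t_w for v, vl in zip(v0, v0_last))
    # (2) projection from the current position using the blended velocity
    p_t = tuple(p + vb * t_elapsed + 0.5 * a * t_elapsed ** 2
                for p, vb, a in zip(p0, v_b, a0_last))
    # (3) projection from the last known position using the last known velocity
    p_t_last = tuple(pl + vl * t_elapsed + 0.5 * a * t_elapsed ** 2
                     for pl, vl, a in zip(p0_last, v0_last, a0_last))
    # (4) blend the two projections into the final predicted position Qt
    return tuple(pt + (ptl - pt) * t_w for pt, ptl in zip(p_t, p_t_last))

# Example: predict 0.1 s ahead with the blending factor of 0.3 used in Section 4.2.
q_t = adr_predict((10.0, 0.0, 10.0), (1.0, 0.0, 0.0),
                  (9.9, 0.0, 10.0), (1.2, 0.0, 0.0), (0.0, 0.0, 0.0),
                  t_elapsed=0.1, t_w=0.3)
```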

4. Prototype System

4.1. Heterogeneous Distributed Virtual Evacuation Prototype System

Based on the above architecture design and key algorithms, we implemented a heterogeneous distributed virtual evacuation prototype system. The system is based on a subway fire scene, and supports multi-user collaborative virtual evacuation drills. The server side of the system uses SmartFoxServer as the TCP server for data synchronization, and Flask as the HTTP server. The client side uses the Unity3D game engine as the development platform. We use the all-in-one PicoVR and a mid-range PC as heterogeneous interaction and computing clients. The refresh rate of the PicoVR HMD is 90 Hz, the monocular resolution is 1200 × 1080, and the field of view is 102 degrees. The all-in-one device is equipped with a gamepad. With the high-precision nine-axis sensor in the HMD, the system enables three-degree-of-freedom interaction. The price of this all-in-one device is about $450, and its main hardware consists of a Qualcomm Snapdragon 820 CPU, an Adreno 530 GPU, 4 GB of RAM, and a Qualcomm QCA6174A wireless card. As a result, the all-in-one device no longer requires the support of a high-performance PC, which reduces experiment costs.
The virtual scene mainly consists of a manually modeled 3D subway station, in which the platform is approximately 8 m wide and 90 m long. There are four exits in total, labeled A, B, C, and D, and located at the two ends of the platform. There are round pillars in the middle of the platform. A fire breaks out on one side of the subway; thus, the two exits at that end are blocked. Considering the weak performance of the all-in-one device, the prototype system does not use real-time lighting. Instead, the system uses area light sources and bakes the lighting into light maps. In order to improve the fidelity of the fire scene, the system uses particle systems to simulate heavy black smoke. At the same time, the system uses weak lighting to create a low-visibility scene. Accompanied by a piercing fire alarm, the system creates a sense of urgency for the user both visually and audibly. The pillars in the middle of the platform have clear exit markings, which are self-illuminated to ensure visibility.
The system uses low-precision 3D models as avatars. In order to reduce the amount of data that needs to be processed when rendering a large number of avatars in the VR system, we simplified the mesh of the high-precision avatar model while keeping the texture, skinning, and skeleton information. The number of triangles in each low-precision model is between 600 and 700, but their appearance and animation features are comparable to those of the high-precision models. To make an avatar’s group easy to distinguish, avatars of the same group share the same coat texture color. We also designed three skeletal animations for each avatar: “run” represents the user’s escape animation, “idle” indicates that the user is not moving, and “greet” is used for non-verbal communication between group members. We implemented two different modes of interaction, all-in-one VR and PC, both of which use the first-person perspective. We use the device-oriented graphical user interface (GUI) to display the local user’s group, flag, the correct exit, the evacuation countdown, system prompts, and other messages. Both types of clients record the position, orientation, movement, and other status data of the avatar at 0.3-s intervals. Figure 6 shows the prototype system.
Because the LAN environment is clean and makes it easy to simulate complex conditions such as network latency, the prototype system server is deployed in the LAN. At the same time, we implement HTTP services to handle user authentication, configuration parameters, and so on. We define in advance the data structures of the requests and responses between server and client, which are used to transfer data such as status updates and messages between local and remote users. The server records the status, actions, and events of all of the users in the virtual scene, according to the data sent by the users.
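As an illustration of the transactional HTTP side, the following minimal sketch shows a Flask service for user authentication and configuration delivery. The endpoint name, request fields, and configuration values are hypothetical and only stand in for the predefined request and response structures mentioned above.
```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical experiment configuration pushed to every authenticated client.
EXPERIMENT_CONFIG = {"scene": "subway_fire", "log_interval_s": 0.3}

@app.route("/login", methods=["POST"])
def login():
    """Authenticate a participant and return the experiment configuration."""
    body = request.get_json(force=True)
    user_id = body.get("user_id")
    if user_id is None:
        return jsonify({"ok": False, "error": "missing user_id"}), 400
    # A real deployment would verify credentials against the user database here.
    return jsonify({"ok": True, "user_id": user_id, "config": EXPERIMENT_CONFIG})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```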

4.2. Performance Evaluation of Key Algorithms

Since the AIL and the protocol-based interaction algorithm for heterogeneous clients cannot be measured with numeric values, they are validated through the implementation of the prototype system: the usability of the system demonstrates their effectiveness. Therefore, only the adjusted dead reckoning (ADR) algorithm is evaluated quantitatively here.
• Adjusted dead reckoning algorithm
To test the accuracy of the algorithm, we conducted a small-scale crowd evacuation experiment, in which participants’ trajectory data were collected in a virtual environment. Five participants were invited to take part in the evacuation experiment, with three repetitions using the PC clients in the LAN. All of the users entered the subway scene at exactly the same time, and were informed of the target exit in advance. The users were instructed to navigate from the platform center to the target exit, and would need to climb up staircases and pass through gates in between. The trajectories recorded by the prototype system are time series data, with each record containing the following attributes: UserID, Timestamp, Position X, Position Y, Position Z, and Action. The time interval between each sample is 0.3 s.
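The following minimal sketch models one such trajectory record together with a simple parser. The attribute names follow the description above, while the comma-separated file layout and the units are assumptions made for illustration.
```python
from dataclasses import dataclass

@dataclass
class TrajectoryRecord:
    """One 0.3-s sample of a participant's state, as logged by the prototype system."""
    user_id: int
    timestamp: float   # assumed to be seconds since the start of the trial
    x: float           # Position X
    y: float           # Position Y (height)
    z: float           # Position Z
    action: str        # e.g., "run", "idle", or "greet"

def parse_record(line: str) -> TrajectoryRecord:
    """Parse one comma-separated log line (assumed file layout)."""
    uid, ts, x, y, z, action = line.strip().split(",")
    return TrajectoryRecord(int(uid), float(ts), float(x), float(y), float(z), action)

sample = parse_record("3,12.6,4.2,0.0,37.5,run")
```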
Taking the trajectories of user activities collected in the experiment as an example, we implemented the KF with a constant-acceleration model, and the ADR, for position prediction. In order to study the running time and prediction accuracy of the algorithms under different update frequencies, we used a uniform distribution over fixed intervals to simulate the update frequency with some randomness. We use the total time consumed by each prediction algorithm to evaluate its time complexity, and use the root mean squared error (RMSE) to measure the deviation of the predicted values from the true values.
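For reference, the RMSE used here corresponds to the standard definition over the n sampled positions, comparing the predicted positions $\hat{p}_i$ with the logged (true) positions $p_i$:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left\lVert \hat{p}_i - p_i \right\rVert^2}$$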
The test results are shown in Figure 7. In this test, we implemented the ADR algorithm with a velocity-blending factor of 0.3, which means that the prediction from the current position receives a higher weight than the prediction from the latest updated position data. As seen from Figure 7a, in terms of accuracy, the prediction error of the ADR algorithm is smaller than that of the KF algorithm by an average of 0.961 m, a reduction of 31.76%. In terms of running time, a single run of the ADR is almost negligible, whereas the KF algorithm requires an average of 0.23 ms. This is because the KF needs to update the state transition model and covariance model at each time step, and performs a series of matrix multiplications. Therefore, we recommend that the ADR algorithm be considered in 3D rendering programs that require high real-time performance. Figure 7b shows the prediction results of the ADR algorithm and the KF algorithm. The XZ plane is the user’s activity plane, and the positive Y axis represents the height. The error in the trajectory predicted by the KF algorithm increased after the avatar moved vertically. The trajectory predicted by the ADR algorithm shows unsatisfactory accuracy when the avatar’s acceleration changes, but the error is quite small elsewhere. This is more in line with the actual situation.

4.3. System Overall Performance Test

To verify the usability of the prototype system, we conducted an overall system performance test. An important question we aimed to address here is: with heterogeneous clients of general configuration, how many concurrent users can the system support in distributed virtual experiments? We assume that the hardware and networks of all of the clients are the same. The performance of the prototype system is mainly influenced by the number of concurrent users and by packet lag. Therefore, we took these as the two factors of a factorial experiment. The number of concurrent users includes five levels, namely 10, 30, 50, 70, and 90. In order to simulate different network environments, we used software to add delays to the packets sent and received. The packet lag includes four levels, namely 0, 10, 20, and 30 milliseconds. The factorial design therefore contains 20 treatments, and each treatment was run with five replicates.
The performance indicators of the prototype system include resource consumption, rendering pressure, and network latency. Resource consumption is measured by CPU usage, memory usage, and network throughput. The indicator of rendering pressure is the client frame rate in frames per second (FPS). Network latency is measured by the overall latency recorded by the client. The hardware configurations used in the test are as follows. (1) PC configuration: Intel (R) Core (TM) i5 750, NVIDIA GeForce GTX 650, and 8 GB RAM. (2) The all-in-one VR is the device described in Section 4.1. (3) Server configuration: Intel (R) Core (TM) i7 6700, NVIDIA GeForce GTX 1060, and 8 GB RAM.
Two participants were invited to take part in the performance test; one used a PC client, while the other used an all-in-one VR client. To simulate a large number of concurrent users, we developed a user agent that can communicate with the server in real time, update the avatar’s location randomly, and upload and download the avatars’ locations and statuses. One end of the subway evacuation route was filled with thick smoke. When the test started, the user agent first generated a specified number of simulated users, which were evenly distributed in the scene and moved randomly. The two participants, who used a PC and a VR client, respectively, were informed of the correct exit in advance. They controlled their respective avatars to navigate through the crowd, and finally reached the other end of the subway platform, which was filled with smoke (Figure 8). The server and both types of clients recorded resource consumption while the program was running. The clients additionally recorded the frame rate and the overall network delay. We took the average of each indicator during each test as the test result.

4.4. Data Analysis

As hardware and computing capacity vary greatly between servers, PCs, and all-in-one VR, the evaluation indicators are also different, and we will discuss them separately.
• Server side
The resource consumption of the prototype system, with different numbers of concurrent users on the server side, is shown in Figure 9a–c. The system resource consumption increases with the number of concurrent users. With 90 concurrent users, this process takes up to 10% of the CPU. Memory usage increases significantly with the number of concurrent users, with the maximum being 320 MB. Network sent traffic (up to 3500 KB per second) is several times higher than network received traffic (up to 500 KB per second). This is because after the server received a user update, it sends the update to all of the other users. When the packet lag is 0, the network traffic both received and sent reaches the highest level. One of the possible reasons is that the latency has caused data packets to get stuck in the network, without reaching the server processing flow on time. Some packets are discarded due to timeout, and are no longer being processed. This results in a reduction in the total network traffic. In general, the system’s CPU usage is not high. This process does not occupy much of the server’s resources, and more concurrent users can be supported.
• PC side
Figure 10 shows the resource consumption of the PC client under different numbers of concurrent users and packet lags. Figure 10a shows that CPU usage does not increase significantly as concurrent users or packet lags increase. Figure 10b shows that memory usage is mainly affected by the number of concurrent users. Since the client only needs to send the status data of the local user, the network sent traffic is stable. On the other hand, the client needs to receive the status data of all other remote users, so the received traffic in Figure 10c increases with the number of concurrent users. However, an increase in packet lag causes network congestion, and some data are discarded. With the increase in concurrent users, the decline in the FPS in Figure 10d is obvious, but the frame rate remains at a very high level. It can be seen from Figure 10e that, in the absence of packet lag, increasing the number of concurrent users has no effect on the network delay on the PC side. However, once packet lag is introduced, the impact of both factors on the overall network delay is approximately logarithmic. Overall, the prototype system is stable on this medium-configuration computer.
• VR side
Since the all-in-one PicoVR runs Android, its performance is measured in a slightly different way. In the following indicators, CPU time refers to the average total time consumed by the most recent 30 frames; the larger the value, the greater the overall pressure on the device. Memory refers to the used heap size. The FPS is capped at 60 by the device’s software development kit (SDK).
We can see from Figure 11a that the number of concurrent users and packet lags have no significant impacts on CPU time consumption. The used heap size in Figure 11b increases with concurrent users, and has no obvious relationship with packet lag. The network traffic in Figure 11c is similar to the PC client. The FPS rate in Figure 11d shows a decreasing trend with the number of concurrent users, but has no obvious relationship with packet lag. This shows that the packet lag does not affect the FPS. That is, packet lag can lead to poor interactivity, but it does not affect the real-time user experience. Figure 11e shows that the impact of these two factors on the overall network delay in the VR client is also logarithmic.
In summary, the overall performance of the server, PC client, and all-in-one VR is stable and less demanding on system resources. The number of concurrent users does not have a significant effect on the overall performance. The main bottleneck of system expansion depends on the performance of the all-in-one VR. Packet lag has a great impact on the overall network delay, and will make system performance decline rapidly. Conditions such as network congestion and packet loss will have a negative effect on the system performance and user experience. In an actual experiment, high network latency should always be avoided.

5. Discussion

Organizing large numbers of people over a network to conduct virtual experiments is a challenging task. The LAN environment is clean and has low network delays, generally less than 20 ms. Our HDVGE prototype system supports the participation of 90 or more concurrent users in collaborative virtual spatiotemporal behavior experiments. The main limitation is the performance of the all-in-one VR. Under the conditions of 90 concurrent users and no packet lag, the PC client can maintain a rendering performance of approximately 300 FPS, while the all-in-one VR can only run at approximately 20 FPS, which barely meets the requirements for a real-time user experience and interactivity. With the continuous development of hardware, all-in-one VR will become more powerful and better able to meet the experimental requirements. At present, the heterogeneous distributed architecture is probably the most effective option for conducting virtual experiments with high numbers of concurrent users.
The Internet is a complicated network environment subject to high latency, owing to the large number of users distributed around the world. Network delay has more influence on real-time performance and interactivity than the number of concurrent users. It should be noted that the relationship between overall network delay and packet lag is not a simple linear one. This shows that the Internet as an experimental environment may introduce more complex factors, which would make it more difficult to meet the real-time and interactivity requirements of an HDVGE. When conducting virtual experiments on an HDVGE, the network environment should be selected according to the actual experimental needs.
When conducting virtual spatiotemporal crowd behavior experiments in emergency scenarios, a very important question to consider is: how can tension be created for the participants? As mentioned in the literature [29], there are several main ways: first, create more realistic emergency elements, such as dim lights and thick smoke; second, set a time limit using a countdown to urge participants to escape; third, define experimental policies, such as rewarding shorter successful evacuation times with a better payoff. In practice, these methods all play a role in the experiment, but the immersion and presence provided by an HMD VR device give a better user experience. For non-immersive devices, the 3D virtual environment on the computer screen is independent of the participant’s cognitive space. In a helmet-based VGE, by contrast, the virtual environment space and the cognitive space are closely coupled, so that the user’s cognition of the virtual world is consistent with that of the real world.
Rendering quality is also an important factor to consider when balancing system performance. At present, a PC client normally has several times more graphics processing power than a VR client. To ensure that the prototype system runs smoothly on the all-in-one VR, we applied a variety of rendering optimization methods, including scene model simplification, character model simplification, and baked lighting. This means that visual quality was sacrificed in exchange for a stable frame rate on the VR system. If the rendering quality is too high, the usability of the system will be reduced. The current heterogeneous distributed virtual environment is a trade-off between high fidelity and availability.
Technically speaking, there are many alternative VR devices that could be used as heterogeneous clients in the proposed framework. For example, cardboard VR headsets have gained much popularity due to their good immersive experience and low cost. However, their limitations are also obvious. On the one hand, cardboard VR headsets depend heavily on the mobile phone’s performance for both computing and rendering, and at present the performance of mobile phones varies greatly across brands. It is therefore a great challenge to deliver consistent experiments with cardboard VR headsets. On the other hand, cardboard VR headsets provide only limited interactivity: the only type of interaction they support is reacting to the user’s head movements detected by the phone’s sensors. For more complex interactions, such as scene roaming and interactions between users, they need to work in collaboration with other devices. These challenges need to be addressed in order to effectively incorporate cardboard VR headsets into an HDVGE.
Additionally, the validity of the virtual behavior experiments is somehow dependent on the sense of presence produced by the immersive VR environment. The sense of presence relies not only on visual and auditory stimuli, but also on tactile, olfactory, and haptic stimuli. Therefore, in order to create a stronger sense of presence, or even a sense of full immersion, the VR system should ideally be able to synchronously produce multi-channel perceptions with regard to the five senses. A stronger sense of presence could potentially lead to a higher similarity in user behavior between the virtual and real world, and therefore enhanced experimental validity.
Human–computer interaction can be achieved through different user interfaces, such as mouse, keyboard, gamepad, and headset interfaces. Existing research [46] has shown that different modes of interaction can lead to different human behavior patterns in virtual environments. The same issue could potentially arise in an HDVGE with the use of heterogeneous devices, particularly when an HMD is used. A major concern with HMDs is that the user cannot see the mouse and keyboard while wearing the headset. This may constitute an important source of error in HDVGE-based experiments. Future work is needed to quantify the impact of interaction mode on human behavior, with an emphasis on HMDs.

6. Conclusions and Future Works

In recent years, low-cost HMD VR devices have become increasingly popular, but they are not yet sufficiently widespread; hence, they will coexist with PC-based systems for a long time. In order to solve the heterogeneity problems caused by various types of clients, and to support virtual spatiotemporal crowd behavior experiments with large numbers of concurrent participants, the HDVGE represents a feasible solution. In this paper, we presented HDVGE as a practical solution, and demonstrated its technical feasibility.
First, we have proposed an HDVGE framework for spatiotemporal crowd behavior experiments, and analyzed the design principles of the HDVGE platform based on this framework. We then designed the architecture and the key technologies of the experiment platform. Finally, using a subway fire as an example, we implemented an HDVGE prototype system for crowd evacuation. Through testing and analyzing the key algorithms and overall performance, we demonstrated the effectiveness of the proposed system.
The results show that in a low-latency LAN environment, the system could support 90 concurrent users for collaborative virtual experiments as heterogeneous distributed clients. System performance bottlenecks were dependent on the all-in-one VR. Packet lag had a great impact on the overall network delay, and would result in a rapid decline in the system performance, leading to further issues such as network congestion, packet loss, etc. In actual experiments, high-latency network environments should be avoided.
We have shown that the HDVGE platform can effectively support heterogeneous clients and multi-user collaboration. We expect applications not only in large-scale spatiotemporal behavior research under normal conditions, but also in evacuation drills under emergency conditions such as fires or earthquakes; such experiments are difficult to conduct in VR environments without multi-user collaboration. The HDVGE could also serve as a new means of obtaining observational data on individual and group behaviors. Future work will address the following. (1) Crowd behavior varies greatly across scenes, so analyzing individual and group behavioral data in different scenarios may lead to different conclusions. (2) We will compare the behavioral data collected in the virtual scene with data from corresponding real-scene experiments and analyze their similarities; a simple way to quantify such similarity at the trajectory level is sketched below. The ongoing study of these factors will help HDVGEs mirror RGEs more closely.
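As one possible approach for item (2), virtual and real trajectories recorded for the same evacuation task could be compared with a standard curve-similarity measure. The sketch below computes the discrete Fréchet distance between two sampled 2D trajectories; the choice of metric and the function name are illustrative assumptions, not part of the prototype system.

```python
# Illustrative comparison of a virtual and a real evacuation trajectory using
# the discrete Frechet distance (one of several possible similarity measures).
import numpy as np


def discrete_frechet(p, q):
    """Discrete Frechet distance between two trajectories (n x 2 and m x 2 points)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    n, m = len(p), len(q)
    ca = np.full((n, m), -1.0)  # memo table for the recursive definition

    def c(i, j):
        if ca[i, j] >= 0:
            return ca[i, j]
        d = np.linalg.norm(p[i] - q[j])
        if i == 0 and j == 0:
            ca[i, j] = d
        elif i == 0:
            ca[i, j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i, j] = max(c(i - 1, 0), d)
        else:
            ca[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i, j]

    return c(n - 1, m - 1)


# Example: two slightly different paths through the same corridor.
virtual_path = [(0, 0), (1, 0.1), (2, 0.2), (3, 0.1)]
real_path = [(0, 0), (1, 0.0), (2, 0.3), (3, 0.2)]
print(discrete_frechet(virtual_path, real_path))  # shortest "leash" spanning both paths
```

A small distance would suggest that the evacuation route chosen in the virtual scene closely follows the route observed in the real scene; other measures (e.g., evacuation time or exit choice agreement) could complement this.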

Acknowledgments

This research was supported by the National Natural Science Foundation of China (41371387), the National Key Research and Development Program of China (2016YFB0502502), the National Key Research and Development Plan (2017YFB0503602), the Pre-research Project of the Equipment Development Department (315050501), and the Internal Program of SLRSS China (Y7Y00200KZ).

Author Contributions

Shen Shen, Jianhua Gong, Jianming Liang and Wenhang Li conceived and designed the methods; Dong Zhang, Lin Huang and Guoyong Zhang performed the experiments; Shen Shen analyzed the data and wrote the paper; all the authors reviewed and edited the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Raggett, D. Extending WWW to support platform independent virtual reality. In Proc. Internet Society/European Networking; Internet Society Press: Reston, VA, USA, 1995; p. 242. [Google Scholar]
  2. Haklay, M.E. Virtual reality and GIS: Applications, trends and directions. In Virtual Reality in Geography; Taylor & Francis: London, UK, 2002; pp. 47–57. [Google Scholar]
  3. Lin, H.; Chen, M.; Lu, G.; Zhu, Q.; Gong, J.; You, X.; Wen, Y.; Xu, B.; Hu, M. Virtual Geographic Environments (VGEs): A New Generation of Geographic Analysis Tool. Earth-Sci. Rev. 2013, 126, 74–84. [Google Scholar] [CrossRef]
  4. Xu, B.; Lin, H.; Chiu, L.; Hu, Y.; Zhu, J.; Hu, M.; Cui, W. Collaborative virtual geographic environments: A case study of air pollution simulation. Inf. Sci. 2011, 181, 2231–2246. [Google Scholar] [CrossRef]
  5. Mallot, H.; Gillner, S.; Van Veen, H.; Bülthoff, H. Behavioral experiments in spatial cognition using virtual reality. In Spatial Cognition; Springer: Berlin/Heidelberg, Germany, 1998; pp. 447–467. [Google Scholar]
  6. Bülthoff, H.H.; Campos, J.L.; Meilinger, T. Virtual Reality as a Valuable Research Tool for Investigating Different Aspects of Spatial Cognition (Abstract). In Proceedings of the International Conference on Spatial Cognition, Freiburg, Germany, 15–19 September 2008; pp. 1–3. [Google Scholar]
  7. Notelaers, S.; De Weyer, T.; Goorts, P.; Maesen, S.; Vanacken, L.; Coninx, K.; Bekaert, P. HeatMeUp: A 3DUI serious game to explore collaborative wayfinding. In Proceedings of the 2012 IEEE Symposium on 3D User Interfaces (3DUI), Costa Mesa, CA, USA, 4–5 March 2012; pp. 177–178. [Google Scholar]
  8. Fajen, B.R.; Warren, W.H. Behavioral dynamics of steering, obstacle avoidance, and route selection. J. Exp. Psychol. Hum. Percept. Perform. 2003, 29, 343–362. [Google Scholar] [CrossRef] [PubMed]
  9. Kretz, T.; Hengst, S.; Roca, V.; Perez Arias, A.; Friedberger, S.; Hanebeck, U.D. Calibrating dynamic pedestrian route choice with an Extended Range Telepresence System. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 166–172. [Google Scholar]
  10. Drachen, A.; Thurau, C.; Togelius, J.; Yannakakis, G.N.; Bauckhage, C. Game Data Mining. In Game Analytics; Springer: London, UK, 2013; pp. 205–253. ISBN 978-1-4471-4768-8. [Google Scholar]
  11. Guardini, P.; Maninetti, P. Better Game Experience through Game Metrics: A Rally Videogame Case Study. In Game Analytics; Springer: London, UK, 2013; pp. 325–361. ISBN 978-1-4471-4768-8. [Google Scholar]
  12. Medler, B. Visual Game Analytics. In Game Analytics; Springer: London, UK, 2013; pp. 403–433. ISBN 978-1-4471-4768-8. [Google Scholar]
  13. Gower, D.W., Jr.; Fowlkes, J.E. Simulator Sickness in the UH-60 (Black Hawk) Flight Simulator; US Army Aeromedical Research Laboratory: Fort Rucker, AL, USA, 1989; Volume 60.
  14. Brooks, J.O.; Goodenough, R.R.; Crisler, M.C.; Klein, N.D.; Alley, R.L.; Koon, B.L.; Logan, W.C.; Ogle, J.H.; Tyrrell, R.A.; Wills, R.F. Simulator sickness during driving simulation studies. Accid. Anal. Prev. 2010, 42, 788–796. [Google Scholar] [CrossRef] [PubMed]
  15. Cha, M.; Han, S.; Lee, J.; Choi, B. A virtual reality based fire training simulator integrated with fire dynamics data. Fire Saf. J. 2012, 50, 12–24. [Google Scholar] [CrossRef]
  16. Lovreglio, R.; Gonzalez, V. The Need for Enhancing Earthquake Evacuee Safety by using Virtual Reality Serious Games. In Proceedings of the Lean & Computing in Construction Congress, Crete, Greece, 4–12 July 2017. [Google Scholar]
  17. Natapov, A.; Fisher-Gewirtzman, D. Visibility of urban activities and pedestrian routes: An experiment in a virtual environment. Comput. Environ. Urban Syst. 2016, 58, 60–70. [Google Scholar] [CrossRef]
  18. Kuliga, S.F.; Thrash, T.; Dalton, R.C.; Hölscher, C. Virtual reality as an empirical research tool—Exploring user experience in a real building and a corresponding virtual model. Comput. Environ. Urban Syst. 2015, 54, 363–375. [Google Scholar] [CrossRef]
  19. Schrom-Feiertag, H.; Schinko, C.; Settgast, V.; Seer, S. Evaluation of guidance systems in public infrastructures using eye tracking in an immersive virtual environment. In Proceedings of the 2nd International Workshop on Eye Tracking for Spatial Research, Vienna, Austria, 23 September 2014; Volume 1241, pp. 62–66. [Google Scholar]
  20. Zhou, B.; Tang, X.; Zhang, H.; Wang, X. Measuring crowd collectiveness. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1586–1599. [Google Scholar] [CrossRef] [PubMed]
  21. Shao, J.; Kang, K.; Loy, C.C.; Wang, X. Deeply learned attributes for crowded scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4657–4666. [Google Scholar]
  22. Helbing, D.; Molnár, P. Social force model for pedestrian dynamics. Phys. Rev. E 1995, 51, 4282–4286. [Google Scholar] [CrossRef]
  23. Li, W.; Gong, J.; Yu, P.; Shen, S.; Li, R.; Duan, Q. Simulation and analysis of congestion risk during escalator transfers using a modified social force model. Phys. A Stat. Mech. Its Appl. 2014, 420, 28–40. [Google Scholar] [CrossRef]
  24. Treuille, A.; Cooper, S.; Popović, Z. Continuum crowds. ACM Trans. Graph. 2006, 25, 1160–1168. [Google Scholar] [CrossRef]
  25. Torrens, P.M. High-resolution space-time processes for agents at the built-human interface of urban earthquakes. Int. J. Geogr. Inf. Sci. 2014, 28, 964–986. [Google Scholar] [CrossRef]
  26. Hoogendoorn, S.P.; Daamen, W. Pedestrian Behavior at Bottlenecks. Transp. Sci. 2005, 39, 147–159. [Google Scholar] [CrossRef]
  27. Lovreglio, R.; Ronchi, E.; Nilsson, D. Calibrating floor field cellular automaton models for pedestrian dynamics by using likelihood function optimization. Phys. A Stat. Mech. Its Appl. 2015, 438, 308–320. [Google Scholar] [CrossRef]
  28. Wang, C.; Li, L.; Yuan, J.; Zhai, L.; Liu, G. Development of emergency drills system for petrochemical plants based on WebVR. Procedia Environ. Sci. 2011, 10, 313–318. [Google Scholar] [CrossRef]
  29. Moussaïd, M.; Kapadia, M.; Thrash, T.; Sumner, R.W.; Gross, M.; Helbing, D.; Hölscher, C. Crowd behaviour during high-stress evacuations in an immersive virtual environment. J. R. Soc. Interface 2016, 13, 20160414. [Google Scholar] [CrossRef] [PubMed]
  30. Cruz-Neira, C.; Sandin, D.J.; DeFanti, T.A.; Kenyon, R.V.; Hart, J.C. The CAVE: Audio visual experience automatic virtual environment. Commun. ACM 1992, 35, 64–72. [Google Scholar] [CrossRef]
  31. Chen, W. Collaboration in Multi-User Immersive Virtual Environments. Ph.D. Thesis, Université Paris-Saclay, Paris, France, 2016. [Google Scholar]
  32. Kinateder, M.; Ronchi, E.; Nilsson, D.; Kobes, M.; Müller, M.; Pauli, P.; Mühlberger, A. Virtual Reality for Fire Evacuation Research. In Proceedings of the 2014 Federated Conference on Computer Science and Information Systems, Warsaw, Poland, 7–10 September 2014; Volume 2, pp. 319–327. [Google Scholar]
  33. Kinateder, M.; Ronchi, E.; Gromer, D.; Müller, M.; Jost, M.; Nehfischer, M.; Mühlberger, A.; Pauli, P. Social influence on route choice in a virtual reality tunnel fire. Transp. Res. Part F Traffic Psychol. Behav. 2014, 26, 116–125. [Google Scholar] [CrossRef]
  34. Lovreglio, R.; Fonzone, A.; dell’Olio, L. A mixed logit model for predicting exit choice during building evacuations. Transp. Res. Part A Policy Pract. 2016, 92, 59–75. [Google Scholar] [CrossRef]
  35. Ronchi, E.; Nilsson, D.; Kojić, S.; Eriksson, J.; Lovreglio, R.; Modig, H.; Walter, A.L. A Virtual Reality Experiment on Flashing Lights at Emergency Exit Portals for Road Tunnel Evacuation. Fire Technol. 2016, 52, 623–647. [Google Scholar] [CrossRef]
  36. Qingshan, Y.; Lin, M. Human-Activity-Geographical-Environment Relationship, Its System and Its Regional System. Econ. Geogr. 2001, 5, 4. [Google Scholar]
  37. Jianhua, G. On Thought and Methodology of Virtual Geographic Experiment. J. Geomat. Sci. Technol. 2013, 30, 399–408. [Google Scholar] [CrossRef]
  38. Williams, D. The mapping principle, and a research framework for virtual worlds. Commun. Theory 2010, 20, 451–470. [Google Scholar] [CrossRef]
  39. Moussaïd, M.; Perozo, N.; Garnier, S.; Helbing, D.; Theraulaz, G. The walking behaviour of pedestrian social groups and its impact on crowd dynamics. PLoS ONE 2010, 5, e10047. [Google Scholar] [CrossRef] [PubMed]
  40. McMahan, R.; Kopper, R.; Bowman, D. Principles for Designing Effective 3D Interaction Techniques. In Handbook of Virtual Environments; Human Factors and Ergonomics; CRC Press: Boca Raton, FL, USA, 2014; pp. 285–311. ISBN 978-1-4665-1184-2. [Google Scholar]
  41. Wikipedia. Dead Reckoning—Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Dead_reckoning (accessed on 5 February 2018).
  42. Murphy, C. Believable Dead Reckoning for Networked Games. In Game Engine Gems 2; A K Peters/CRC Press: Boca Raton, FL, USA, 2011; pp. 307–328. ISBN 978-1-56881-437-7. [Google Scholar]
  43. Wikipedia. Kalman Filter—Wikipedia, The Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Kalman_filter (accessed on 5 February 2018).
  44. Pantel, L.; Wolf, L.C. On the suitability of dead reckoning schemes for games. In Proceedings of the 1st Workshop on Network and System Support for Games, Braunschweig, Germany, 16–17 April 2002; pp. 79–84. [Google Scholar]
  45. Shi, W.; Corriveau, J.P.; Agar, J. Dead reckoning using play patterns in a simple 2D multiplayer online game. Int. J. Comput. Games Technol. 2014, 2014. [Google Scholar] [CrossRef]
  46. Thrash, T.; Kapadia, M.; Moussaid, M.; Wilhelm, C.; Helbing, D.; Sumner, R.W.; Hölscher, C. Evaluation of Control Interfaces for Desktop Virtual Environments. Presence Teleoperators Virtual Environ. 2015, 24, 322–334. [Google Scholar] [CrossRef]
Figure 1. Heterogeneous distributed virtual geographic environment (HDVGE) conceptual framework.
Figure 2. Architecture of HDVGE.
Figure 3. Flow chart of the abstract interaction layer for heterogeneous clients.
Figure 4. Flow chart of protocol-based interactions for heterogeneous clients.
Figure 5. State synchronization process between clients.
Figure 6. Prototype system overview: (a) subway scene with heavy smoke and dim lighting; (b) first-person view with GUI; (c) three skeletal animations of an avatar; (d) all-in-one virtual reality (VR) client; (e) personal computer (PC) client.
Figure 7. Comparison between adjusted dead reckoning (ADR) and the Kalman filter (KF): (a) accuracy of the two algorithms and (b) real prediction results.
Figure 8. Overview of the performance testing process.
Figure 9. Server resource consumption with different numbers of concurrent users and packet lags: (a) CPU percentage; (b) memory; (c) network (PL stands for packet lag; S refers to sent traffic and R to received traffic; abbreviations in the other figures have the same meanings).
Figure 10. PC resource consumption with different numbers of concurrent users and packet lags: (a) CPU percentage; (b) memory; (c) network; (d) frame rate (frames per second); (e) network delay.
Figure 11. All-in-one VR resource consumption with different numbers of concurrent users and packet lags: (a) CPU percentage; (b) memory; (c) network; (d) frame rate (frames per second); (e) network delay.
