Article

Design of Interactions for Handheld Augmented Reality Devices Using Wearable Smart Textiles: Findings from a User Elicitation Study

1 Department of Computer Science and Software Engineering, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
2 Department of Chemistry, Xi’an Jiaotong-Liverpool University, Suzhou 215123, China
3 Irving K. Barber School of Arts and Sciences, The University of British Columbia Okanagan, Kelowna, BC V1V 1V7, Canada
* Author to whom correspondence should be addressed.
Submission received: 27 June 2019 / Revised: 31 July 2019 / Accepted: 31 July 2019 / Published: 5 August 2019
(This article belongs to the Special Issue Augmented Reality: Current Trends, Challenges and Prospects)

Abstract

Advanced developments in handheld devices’ interactive 3D graphics capabilities, processing power, and cloud computing have created great potential for handheld augmented reality (HAR) applications, which allow users to access digital information anytime, anywhere. Nevertheless, existing interaction methods are still confined to the touch display, device camera, and built-in sensors of these handheld devices, which make interactions with AR content obtrusive. Wearable fabric-based interfaces promote the subtle, natural, and eyes-free interactions that are needed when interacting in dynamic environments. Prior studies have explored the possibilities of using fabric-based wearable interfaces for head-mounted AR display (HMD) devices. However, the interface metaphors of HMD AR devices are inadequate for handheld AR devices because a typical HAR application requires users to perform interactions with only one hand. In this paper, we investigate the use of a fabric-based wearable device as an alternative interface option for performing interactions with HAR applications. We elicited user-preferred gestures that are socially acceptable and comfortable to use for HAR devices. We also derived an interaction vocabulary of wrist and thumb-to-index touch gestures, and present broader design guidelines for fabric-based wearable interfaces for handheld augmented reality applications.

1. Introduction

Augmented reality (AR) overlays computer-generated visual information—such as images, videos, text, and 3D virtual objects—onto the real world [1]. Unlike virtual reality (VR), which immerses users inside a computer-generated environment, AR allows users to see the real world with virtual objects blended into it and to interact with the virtual content in real-time [2]. Increasing computational capabilities, powerful graphics processors, built-in cameras, a variety of sensors, and cloud-supported information access have made AR possible on handheld mobile devices, such as smartphones and tablets [3]. The evolution of AR technology—from heavy wearable head-mounted display (HMD) devices to smartphones and tablets—has enabled new and rich handheld mobile AR experiences for everyday users. In handheld augmented reality (HAR), users see the real world overlaid with virtual information in real-time through the camera of their mobile devices. Current mobile devices are small, lightweight, and portable enough to be carried wherever users go. With this ubiquitous availability, HAR allows us to develop and design innovative applications in navigation, education, gaming, tourism, interactive shopping, production, marketing, and other domains [3]. Thus, smartphones have been identified as an ideal platform for HAR experiences in various outdoor and indoor environments [4,5,6].
In order to interact with the virtual world using HAR displays, a user needs to position and orient the device with one hand and manipulate the virtual 3D objects with the other. In general, the touchscreen is used as the primary interface for interacting with AR content [7,8]. In addition, the various built-in sensors in handheld devices—such as cameras, GPS, compass, accelerometers, and gyroscopes—make it possible to precisely determine the position and orientation of the device in the real world (e.g., [8,9,10]). Furthermore, the device’s camera can be used to naturally capture the user’s mid-air hand movements while holding the device [11,12].
As in HMD AR, manipulations such as selecting and moving virtual 3D information are the primary interactions on HAR devices [13]. Existing HAR interaction methods, such as touch input, offer promising solutions for manipulating virtual content (e.g., [14]). However, they still have substantial limitations. For instance, touch input is limited by the device’s physical boundary, and usability suffers as on-screen content becomes occluded by the fingers (i.e., finger occlusion [15,16]). Also, 2D inputs on the touch surface do not directly support manipulating the six degrees of freedom of a virtual object in the augmented world [17]. Unlike touch-based input, device movement-based HAR interaction methods support 3D object manipulation (e.g., [8,9]). However, orienting the device to rotate 3D virtual objects might force users to lose sight of the manipulated virtual object on the display. Mid-air hand gesture input supports more natural and intuitive interaction for HAR applications (e.g., [11,12]). Nevertheless, using the device’s back camera to track hand and finger movements is often not feasible because (1) tracking may not be very accurate, (2) the device must be held at a certain distance from the user’s eyes, and (3) the human arm has a very limited region of reach. Furthermore, interactions with handheld devices also raise issues of social acceptance when performed in front of unfamiliar people [18].
With recent developments in fabric-based technology and new types of electronic textiles and smart clothing [19], Mark Weiser’s dream of computers that “weave themselves into the fabric of everyday life” [20] has taken one step closer to reality. Integrated into users’ clothing, fabric sensors are able to unobtrusively detect user movements [21] and convert them into inputs to control modern devices (e.g., [22,23,24]). Fabric-based wearable devices are soft, natural, and flexible; they remove the need for the user to hold or carry a device; they can be worn without discomfort; and they offer more freedom of movement compared to rigid sensors such as accelerometers (e.g., [25]). Recently, researchers have started investigating fabric-based interaction methods in an attempt to provide a more natural, unobtrusive end-user interaction experience (e.g., [24,26]).
As stated by Hansson et al., an interface is considered “natural” when it builds upon the knowledge and experience that the user already possesses [27]. Humans wear clothes throughout their lives and, in general, need no special training to wear or use them—in other words, to use a clothing-based wearable interface. Among the different body parts, users’ hands enable various poses and are particularly important and suitable for gestural interaction for the following reasons: (1) they are always available, (2) they support high proprioception, (3) they require minimal movement, (4) they produce distinct subtle gestures, and (5) they afford socially acceptable gestures.
In this paper, we are particularly interested in exploring fabric-based interaction methods that allow a user to use combined touch and gesture input to perform HAR interactions. To this end, we explore the potential use of gestures performed via a hand-worn clothing-based artifact (similar to a glove). Textile-based input is not new (e.g., [22,23,24,25]), but its use has been directed primarily at traditional mobile devices [24] and smartwatches [23]. To the best of our knowledge, no studies have explored the combined use of hand and wrist gestures performed via a hand-worn device to interact with HAR content. To fill this gap, our work investigates the use of clothing-based hand and wrist gestures for HAR content. Our paper makes the following contributions: (1) an investigation of the potential design space of clothing-based gestures using the hand and wrist for HAR applications; (2) identification of a set of user-defined gestures; and (3) a set of design guidelines for fabric-based devices that allow users to interact with HAR content.

2. Related Work

2.1. Participatory Elicitation Studies

An elicitation method involves potential users in the process of designing a new type of interface or interaction. This approach allows designers to discover gestures from the end-users’ perspective that are often not in the initial gesture set conceived by the designers. User-preferred gestures are easier to remember than expert-designed gestures [28]. To elicit gestures from users, an interface designer shows the result of an action for each task, known as a referent [29]. The designer then asks each participant to imagine and perform an action that would cause each referent to occur. These user-proposed actions are known as symbols [29]. The designer compares all of the symbols for a given referent across participants and groups them based on their similarity. Wobbrock et al. [29] introduced a method to understand the agreement and other factors associated with user-elicited gestures. Their method has been widely used in other elicitation studies (e.g., [30,31,32]). Most recently, Vatavu and Wobbrock [33] refined the agreement measure to describe the data more precisely. This refined agreement formula and approach is applied in the analysis of our results.
In terms of user-elicited gestures for augmented reality, Piumsomboon et al. [34] presented a study of user-defined hand gestures for head-mounted AR displays. Their participants elicited gestures using both hands. By contrast, HAR devices are not head-worn and are often operated using only one hand on the touchscreen. Thus, the interface metaphors of HMD AR are not suitable for HAR displays. In terms of single-hand thumb-to-finger interactions, Chan et al. [35] presented a study of user-elicited single-hand gestures using two or more fingers. They reported that both the thumb and index fingers were highly involved in the interactions of their gesture set. Their study focused on the fingers and did not include the wrist, which can provide richer interaction possibilities (e.g., [36,37]).
The above studies did not use devices to elicit user-designed gestures. That approach enables users to propose any gesture regardless of the feasibility of its implementation and the affordances of a device. Moreover, in that approach, the way similar-looking elicited gestures are grouped significantly influences the magnitude of the agreement rates [38], as the grouping often depends on the researcher’s goals for the specific domain for which gestures were elicited. Consequently, by following that method, interface designers are unable to recommend a suitable sensing technology to recognize the user-elicited gestures. To address these issues, Nanjappan et al. [39] recently co-designed a prototype with potential end-users and conducted an elicitation study for a fabric-based wearable prototype to devise a taxonomy of wrist and touch gestures. Their non-functional prototype allowed users to physically feel the affordances of the interface and to perform preferred gestures that are implementable in a functional version of the device. We adopted and extended their approach to elicit both wrist and thumb-to-index touch gestures for interactions with HAR applications.

2.2. Hand-Worn Clothing-Based Interfaces

The human hand is primarily used to interact with and manipulate real-world objects in everyday life. Hence, it is not surprising that hand gestures, involving the hands and fingers, are repeatedly studied in interactive systems (e.g., [25,26,40,41]). Glove-based interfaces—the most popular systems for capturing hand movements—emerged nearly thirty years ago and continue to attract a growing number of researchers. Glove-based input devices remove the requirement for the user to carry or hold a device to perform interactions. They are implemented by directly mounting different types of sensing technologies on the hand, mostly on each finger. The development of fabric-based strain sensing technology allows strain sensors to be either sewn onto or attached to a glove to detect hand or finger movements [25,41,42].
Notably, Miller et al. [42] proposed a glove system for thumb-based input with two approaches: (1) tapping the ring and little fingertips with the thumb, or (2) sliding the thumb over the palmar surfaces of the index and middle fingers. Their glove consists of both conductive and non-conductive fabrics, supporting both discrete and continuous output. In their design, the palmar sides of the index and middle fingers are completely covered with a grid of conductive threads woven in an over-and-under pattern, while the pads of the thumb, ring, and little fingers are covered in conductive fabric. The glove-based input essentially acts as a set of switches that are closed by making contact with the thumb. This allows the glove to detect taps on any fingertip and swipe gestures performed on the conductive grids of the index and middle fingers. While this approach sounds promising for single-handed eyes-free interactions, the authors reported that their sensing technology produced inaccurate results. Peshock et al. [25] presented “Argot”, a wearable one-handed keyboard glove that allows users to type English letters, numbers, and symbols. Argot is made of multiple polyester or spandex blended stretch-knit and textile patches, which provide breathability and mobility. It employs conductive thread and textile conductive patches, which improve user comfort, and utilizes a simple input language. The patches are installed at 15 different locations: on the finger pads (5 patches, one per digit), the fingernails (4 patches, excluding the thumb), and the sides of the index, middle, and ring fingers (2 patches per finger side). In addition, it provides both tactile and auditory feedback through magnetic connections. A layering system encloses the magnets within the conductive patches, and all materials are insulated using fusible stitchless bonding film. However, the authors reported that users frequently triggered the side buttons on the index, middle, and ring fingers by accident while performing gesture inputs.
Thumb-to-finger interaction methods present a promising input mechanism suited to subtle, natural, and socially acceptable gestures. In particular, prior studies on hand gestures [35,43] have found that the thumb and index fingers are preferred over the middle, ring, and pinky fingers. As explained by Wolf et al. [43], the thumb is opposable and can rotate to touch the other fingers of the same hand. Similarly, the muscles involved in moving each finger and the biomechanics of the hand make the index finger well suited for independent movement [35]. Notably, Yoon et al. [24] proposed “Timmi”, a finger-worn piezoresistive fabric-based prototype that supports thumb-to-finger multimodal input. Timmi comprises elastic fabric (80% Nylon and 20% Spandex) painted with conductive elastomers and conductive threads; a diluted conductive elastomer transforms a normal fabric into a piezoresistive one. It includes pressure and strain sensors made in two different shapes, with conductive threads cross-stitched to control the fabric. The interface can be worn on a finger, especially the index finger, and the Spandex provides exceptional elasticity. Timmi supports the following input methods: finger bend, touch pressure, and swipe gestures. This approach takes advantage of digit-wise independence and successfully uses thumb-to-index touch input in a natural, eyes-free manner.
Similarly, wrist-worn interfaces are particularly suitable for performing natural, subtle, and eyes-free interactions that can be both continuous and discrete [44] and that support high proprioception [45]. In particular, Strohmeier et al. [46] proposed “WristFlicker”, which detects wrist movements using a set of conductive polymer cables as strain sensors embedded in wrist-worn clothing (e.g., a wrist warmer). They employed three sets of two counteracting strain sensors to precisely measure the flexion/extension, ulnar/radial deviation, and pronation/supination of the human wrist. WristFlicker supports both discrete and continuous input and can be used in an eyes-free manner. Recently, Ferrone et al. [47] developed a wristband equipped with stretchable polymeric strain gauge sensors to detect wrist movements. A number of small filaments (0.7 mm diameter, 1 cm length), made from a mixture of thermoplastic and nano-conductive particles, are distributed over the surface of the wristband at regular intervals to cover the muscles of interest. Their system showed high accuracy in sensing wrist flexion and extension. Our proposed prototype can be implemented by combining the two approaches mentioned above—thumb-to-index touch inputs and wrist gestures—and, as such, the results of this study could also be domain-independent and useful for designers of applications beyond augmented reality.

2.3. Summary

HAR is becoming promising in many different fields, such as tourism, shopping, learning, and gaming [3]. To allow a more natural and intuitive way to interact with HAR content, researchers have suggested extending interaction methods beyond the handheld mobile device itself. Advances in fabric sensing technology allow us to combine multiple interface modalities. Prior studies on fabric-based interfaces focused on implementing fabric sensing technology to detect either finger (e.g., [24]) or wrist (e.g., [46]) movements, but not for HAR interactions. Thus, a thorough investigation of user preferences for a fabric-based hand-worn interface will help designers choose the types of gestures to utilize in HAR interfaces that lead to a natural and intuitive interactive experience.

3. Methodology

Our study explored the scenario where a fabric-based hand-worn device would be available to users to perform interactions with HAR devices. We followed the methodology proposed by [39] to identify user-preferred thumb-to-index touch and wrist gestures for HAR interactions using a hand-worn fabric-based prototype (see Figure 1a).
In order to minimize the effect of users’ previous knowledge acquired from existing touch-based interfaces [48], we applied the production technique [48] by asking each participant to propose three different gestures for each HAR task. In a typical HAR usage scenario, only one hand is available for users to perform interactions. Therefore, when given a HAR task, we asked our participants to perform gestures that they thought were suitable and natural for that particular task while holding the device in one hand.

Fabric-Based Hand-Worn Interface

Our proposed interface is a modified hand glove with a fingerless design made of Lycra and cotton. Three soft foam buttons—two on the sides of the index finger on the proximal and middle phalanges and one on the palmar side of the index finger (see Figure 1b)—were fixed using fabric glue. All three buttons allow touch (to toggle between states) and hold (to keep one state over time) gestures. The locations of these soft buttons were determined based on prior studies [24,25] and on user preferences from our series of initial pilot studies.
Our co-designed fabric-based interface is based on human hand movements, especially wrist [49] and finger [24] joint movements, and supports the use of both thumb-to-index touch and wrist gesture inputs (see Figure 1c–e). The wrist is a flexible joint whose movements can take place along different axes. Our proposed design supports both horizontal and vertical wrist movements. Flexion occurs when the palm bends downward, towards the wrist; extension is the opposite movement. Ulnar and radial deviation are the rightward and leftward wrist movements that occur when the palm faces down. Furthermore, the three soft foam buttons enable legacy-inspired touch and hold gestures.
The hand-worn fabric-based prototype used in our study supports the following wrist and thumb-to-index touch gestures: (1) flexion and extension, and ulnar and radial deviation of the wrist joint; (2) tap and hold gestures using the thumb; and (3) combinations of these gestures. There are only two constraints when performing a combination of wrist and thumb-to-index touch gestures. First, the two touch gestures (tap and hold) cannot be performed together; a touch gesture in a combination is always paired with one of the wrist gestures. Second, the touch gesture must precede the wrist gesture. See Figure 1c–e for sample gestures supported by our prototype.
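To make this gesture grammar concrete, the following minimal Python sketch (with hypothetical gesture tokens) checks whether a proposed gesture sequence satisfies the two constraints above; it only illustrates the rules and was not part of the study software.

```python
# Hypothetical gesture tokens; the prototype itself was non-functional.
WRIST = {"flexion", "extension", "ulnar_deviation", "radial_deviation"}
TOUCH = {"tap_b1", "tap_b2", "tap_b3", "hold_b1", "hold_b2", "hold_b3"}


def is_supported(sequence):
    """Check a gesture sequence against the prototype's gesture grammar:
    a single wrist or touch gesture, or exactly one touch gesture followed
    by exactly one wrist gesture (touch never combines with touch)."""
    if len(sequence) == 1:
        return sequence[0] in WRIST | TOUCH
    if len(sequence) == 2:
        first, second = sequence
        return first in TOUCH and second in WRIST
    return False


assert is_supported(["flexion"])                  # simple wrist gesture
assert is_supported(["hold_b2", "extension"])     # touch precedes wrist: valid
assert not is_supported(["extension", "tap_b1"])  # wrist before touch: invalid
assert not is_supported(["tap_b1", "hold_b1"])    # two touch gestures: invalid
```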

4. User Study

4.1. Participants

Thirty-three participants volunteered for the study (10 female; average age 20.61 years, SD = 1.13). All were university students from different educational backgrounds (mathematics, engineering, computer science, accounting, and industrial design) and were recruited using WeChat, a widely used social media platform. All participants reported owning a handheld mobile device (such as a smartphone or tablet). Of these, 87.9% reported having some knowledge of AR (14 participants had HAR experience). Fourteen participants reported owning at least one wearable device, and 29 participants (87.88%) expressed interest in using wearable mid-air gestural interfaces to interact with their handheld devices. None of our participants were left-handed.

4.2. Experimental Setup

The interaction space was defined above a table (160 cm × 80 cm) in a dedicated experimental laboratory. All participants were seated in front of the table, and a Nexus 5X mobile phone running Android 8.1 was used as the HAR device. Two 4K cameras mounted on tripods captured the entire experiment from two different angles. A 55-inch 4K display was used to play the video of possible gestures (see Figure 2).

4.3. Handheld Augmented Reality Tasks

We wanted to identify a list of tasks representative of commonly used HAR applications. To do this, we first looked at Piumsomboon et al.’s task list [34] for HMD AR interactions and adopted the tasks related to 3D object manipulation. In addition, we surveyed the most common operations in HAR applications across different domains (such as IKEA’s Place app) and collected a list of user interface (navigation) tasks. Furthermore, in HAR, the camera is the main interaction medium and enables different kinds of HAR experience, such as creating AR-based video content and personalized emojis (notably Apple’s Animoji); we therefore added tasks related to camera control. After the pilot studies, we limited the task list to three categories to reduce the duration of the experiment and retain participants’ commitment during the elicitation study. Table 1 presents the final set of 27 tasks, classified into three categories: (1) User Interface, (2) Object Transformation, and (3) Camera.

4.4. Procedure

The elicitation process consisted of four phases for each participant. All 33 participants were video-recorded throughout the experiment, and extensive notes were taken. In addition to administering two questionnaires, one of the researchers observed each session and interpreted the participants’ gestures. The entire process lasted about 50 min per participant.

4.4.1. Introduction

Participants were introduced to the experiment setup and a short video about HAR interactions was played for those who were not aware of this technology. A short online questionnaire was given to them to collect demographic and prior experience information. Each participant was given 5 min to familiarize themselves with our HAR app.

4.4.2. Pre-Elicitation

In this second phase, participants were informed of the purpose of the study and primed [48] with a two-minute video demonstrating the possible ways of using our prototype. The video explained the detailed use of the prototype, including the possible wrist and thumb-to-index touch gestures it supports, such as tap and hold gestures. The researcher answered each participant’s questions at this stage, e.g., regarding the types of gestures supported by our prototype and the use of the same button with different gestures for multiple tasks (for example, using button one to perform both tap and hold gestures). We informed all participants that using the buttons was not compulsory and that they could use them as they preferred. After this, participants were given a suitably sized fabric-based interface. We prepared the black fabric-based interface in three different sizes, suitable for both right and left hands, and did not constrain which hand participants wore the interface on. We informed participants that all 27 tasks would be introduced, one by one, via our HAR app, and asked them to perform three different gestures for each task while holding the handheld device in one hand.

4.4.3. Elicitation

To elicit user-preferred gestures for HAR interactions, participants were asked to hold the AR device (i.e., the smartphone) in their preferred hand. They were encouraged to rest their arms on the table while holding the device. The 27 HAR tasks were presented via our HAR app and also on a printed A4 sheet for reference. All 33 participants were asked to follow the think-aloud protocol [29] while devising three different gestures for each task. Each participant was given one minute to associate three gestures of their choice with each task and to perform them one by one while holding the device (see Figure 3a,b). They were also instructed to pick their preferred gesture for each task and to rate it in terms of “social acceptance” and “comfort of use” in dynamic environments. The 27 tasks were always presented in the same order for each participant. For each task, the experimenter recorded the gesture code of the preferred gesture and its ratings using custom-built software. To gain a greater understanding of their thought process, participants were asked to say a few words about their preferred gesture for each task while not holding the phone.

4.4.4. Semi-Structured Interview

Finally, we held a short semi-structured interview with each participant about their experience with our fabric-based prototype, including their opinions and any difficulties encountered while performing gestures for the HAR tasks. Verbal feedback was also encouraged throughout the experiment while participants used the wrist and thumb-to-index touch gestures for HAR interactions.

4.5. Measures

We employed the following measures to assess and understand users’ preferences and cognition for gestures produced with our fabric-based interface for HAR interactions. First, we grouped and tallied the wrist and thumb-to-index touch gestures based on predefined unique gesture codes, which produced a percentage score for each gesture. Then, we computed agreement rates (A-Rate) for each of the 27 tasks, coagreement rates (CR) between pairs of tasks (e.g., scroll up/scroll down), and between-subject agreement (e.g., by HAR experience) using the formulas proposed by Wobbrock et al. [29] and Vatavu and Wobbrock [33], respectively, and their agreement analysis application (AGATe: AGreement Analysis Toolkit). The agreement rate is computed as follows:
\[ AR(r) = \frac{|P|}{|P| - 1} \sum_{P_i \subseteq P} \left( \frac{|P_i|}{|P|} \right)^2 - \frac{1}{|P| - 1} \]
where “P is the set of all proposals for referent r, |P| the size of the set, and Pi subsets of identical proposals from P” [33]. Using this formula, we can quantify how much agreement our participants shared for gestures produced with our fabric-based interface for HAR interactions. Finally, we asked participants to rate their preferred gesture for each task in terms of social acceptance and comfort of use, on a scale of 1–7.
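As an illustration, the following Python sketch computes this agreement rate, together with a pairwise coagreement rate, from lists of gesture codes (one code per participant). The gesture codes shown are hypothetical, and the actual analysis in this paper was carried out with AGATe.

```python
from collections import Counter
from itertools import combinations


def agreement_rate(proposals):
    """AR(r) for one referent; proposals holds one gesture code per participant."""
    P = len(proposals)                              # |P|
    if P < 2:
        return 1.0
    groups = Counter(proposals).values()            # sizes |P_i| of identical-proposal subsets
    return (P / (P - 1)) * sum((n / P) ** 2 for n in groups) - 1 / (P - 1)


def coagreement_rate(r1, r2):
    """CR(r1, r2): fraction of participant pairs that agree on both referents."""
    pairs = list(combinations(range(len(r1)), 2))
    agree = sum(1 for i, j in pairs if r1[i] == r1[j] and r2[i] == r2[j])
    return agree / len(pairs)


# Hypothetical codes for the paired referents "Move closer" / "Move further"
closer = ["extension", "extension", "tap_b1", "extension", "flexion"]
further = ["flexion", "flexion", "tap_b2", "flexion", "extension"]
print(agreement_rate(closer), coagreement_rate(closer, further))
```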

5. Results

We collected a total of 2673 gestures (33 participants × 27 tasks × 3 gestures) for the 27 given HAR tasks from our 33 participants. In addition, our data collection included the video recordings, transcripts, and verbal feedback for each participant. Our results include the agreement between our users’ wrist and thumb-to-index touch gesture proposals, a gesture taxonomy, a user-preferred gesture set, subjective feedback, and qualitative observations.

5.1. Classification of Gestures

All elicited user-preferred gestures were organized into the three categories supported by our proposed interface: (1) wrist gestures: flexion, extension, and ulnar and radial deviation; (2) touch gestures: tap and hold; and (3) combinations of touch and wrist gestures. Touch gestures were performed with the thumb by either tapping or holding any of the three soft foam buttons on the index finger. Wrist gestures were performed by moving the wrist joint along two different axes. As stated earlier, one key constraint is that the touch gesture (tap or hold) must always precede the wrist gesture when performing a combination of thumb-to-index and wrist gestures.

5.2. Consensus between the Users

Table 2 shows the agreement rate for each of the 27 tasks. The participants’ agreement rates (A-Rate) ranged between 0.053 (lowest agreement) and 0.328 (medium agreement), with a mean A-Rate of 0.144. We applied the coagreement rate (CR) formula proposed by Wobbrock et al. [29] to understand the agreement shared between two tasks r1 and r2. In most cases, users chose opposite gestures for directional pairs with similar meanings, such as “Swipe left/Swipe right” and “Move closer/Move further”. In our results, “Move closer” and “Move further” have equal agreement rates (A-Rate = 0.138 for both). The CR for “Move closer” and “Move further” was 0.095, which suggests that opposite gestures were used to perform these two tasks.

5.3. Effects of Users’ Prior Experience on Agreement

In our study, 33 participants were asked to perform gestures for HAR applications. We found more agreement among users who had no prior experience with HAR for the tasks “Long press on the target” (0.343 vs. 0.124, Vb(2,N=33) = 370.806, p = 0.003), “Select the target” (0.295 vs. 0.144, Vb(2,N=33) = 177.859, p = 0.020), and “Uniform scale up” (0.171 vs. 0.052, Vb(2,N=33) = 110.071, p = 0.052), while users with HAR experience achieved higher agreement for the tasks “Move right” (0.301 vs. 0.133, Vb(2,N=33) = 217.095, p = 0.012), “Move left” (0.255 vs. 0.152, Vb(2,N=33) = 81.504, p = 0.088), and “Scroll down” (0.229 vs. 0.124, Vb(2,N=33) = 85.409, p = 0.082); see Table 2. Similarly, there was more agreement among participants who did not use a wearable device for the tasks “End recording” (0.275 vs. 0.105, Vb(2,N=33) = 202.604, p = 0.019) and “Select next section” (0.392 vs. 0.2645, Vb(2,N=33) = 18.881, p = 0.057) than among those who used at least one wearable device. To further understand these differences, we computed between-group coagreement rates for each task. For example, the coagreement for the task “Long press on the target” was CRb = 0.207, showing that only 20.7% of all pairs of users across the two groups agreed on how to press the selected target in a HAR application, i.e., by holding button three. The other participants disagreed because, while the users with prior HAR experience preferred to press the selected target by holding button three (the equivalent of performing a hold gesture on a smartphone), the users with no prior HAR experience proposed more variations, such as tapping button one and holding button two. All of the gestures elicited from users who had never used HAR applications before showed a clear influence from previous experience acquired with touchscreen devices; this difference was statistically significant (p = 0.003) for “Long press on the target” and showed the largest effect size (Vb = 370.806) among all tasks. A similar effect was observed for the “Take a photo” task, but from another perspective: although the agreement rates of the two groups were similar (0.190 vs. 0.210) and the difference was not significant (Vb(2,N=18) = 3.096), the coagreement rate revealed different gesture preferences for the two groups (CRb = 0.167).

5.4. Taxonomy of Wrist and Thumb-to-Index Touch Gestures

To further understand the wrist and thumb-to-index touch gestures used to perform HAR interactions, we considered the following three dimensions in our analysis. We adopted and modified these dimensions from previous studies [35,50,51] and grouped them by the specifics of both wrist and thumb-to-index touch gestures (an illustrative coding sketch follows the list):
  • Complexity (see Figure 4) identifies a gesture as either (a) simple or (b) complex. We define simple gestures as those performed using only one action (a wrist or a touch gesture). For example, moving the wrist downwards toward the palm to perform downward flexion and/or using the thumb to press any of the soft foam buttons are simple gestures. Gestures performed using two or more distinct actions are identified as complex (e.g., pressing a soft foam button followed by moving the wrist downwards toward the palm). We adopted this dimension from [52].
  • Structure (see Figure 5) represents the relative importance of the wrist and touch gestures in the elicitation of HAR gestures, with seven categories: (a) wrist, (b) touch (button one), (c) touch (button two), (d) touch (button three), (e) touch (button one) and wrist, (f) touch (button two) and wrist, and (g) touch (button three) and wrist. We modified this category from the taxonomy of Vatavu and Pentiuc [51]. For example, in the touch-only categories, the tap or hold gesture was performed using any one of the three buttons.
  • Action (see Figure 6) groups the gestures based on their physical actions rather than their semantic meaning, with five categories: (a) scroll, (b) swipe, (c) tap, (d) hold, and (e) compound. We adopted and modified this classification from Chan et al. [35], who used these dimensions to define user-designed single-hand microgestures without a specific domain. For example, downward flexion and upward extension were grouped as scrolls, while leftward flexion and rightward extension were grouped as swipes.
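To make this coding concrete, the minimal sketch below (with illustrative, hypothetical labels) shows one way an elicited gesture could be recorded along the three dimensions; it is not the coding software used in the study.

```python
from dataclasses import dataclass

# Illustrative label sets for the three taxonomy dimensions.
COMPLEXITY = ("simple", "complex")
STRUCTURE = ("wrist", "touch_b1", "touch_b2", "touch_b3",
             "touch_b1_wrist", "touch_b2_wrist", "touch_b3_wrist")
ACTION = ("scroll", "swipe", "tap", "hold", "compound")


@dataclass
class GestureCode:
    complexity: str   # one action (simple) vs. two or more distinct actions (complex)
    structure: str    # modality/button combination involved
    action: str       # physical action rather than semantic meaning


# "Hold button one, then flex the wrist downward" could be coded as:
example = GestureCode("complex", "touch_b1_wrist", "compound")
print(example)
```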
Simple gestures were frequently performed for both discrete (82.75%) and continuous (49.78%) tasks. Interestingly, participants preferred simple touch gestures (51.05% vs. 31.70%) for discrete tasks and simple wrist gestures (36.58% vs. 13.2%) for continuous tasks. Similarly, complex gestures (50.22% vs. 17.25%) were preferred for the continuous tasks. These results were confirmed by a one-way ANOVA test: we found a significant effect of complexity (F(1,25) = 73.274, p < 0.001), as our participants preferred simple gestures for discrete HAR tasks.
Our participants preferred to use button one on the index finger to perform both simple and complex gestures across all 27 tasks. For example, for the discrete tasks, 27.51% of gestures were performed using button one (button two: 23.78%; button three: 17.01%). Similarly, button one was involved in nearly 19.91% of complex gestures for the continuous tasks (button two and wrist: 17.97%; button three and wrist: 12.34%). A one-way ANOVA test revealed a statistically significant effect for touch inputs using button one for discrete tasks (F(1,25) = 14.275, p = 0.001) and for button one and wrist inputs for continuous tasks (F(1,25) = 23.773, p < 0.001).
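For reference, a one-way ANOVA of this kind can be sketched in Python with SciPy as follows; the per-task proportions below are placeholders, not the study data.

```python
from scipy import stats

# Placeholder per-task proportions of gestures of interest, grouped by task type;
# the study compared such proportions across the discrete and continuous tasks.
proportions_discrete = [0.85, 0.80, 0.78, 0.90, 0.83, 0.88]
proportions_continuous = [0.52, 0.47, 0.55, 0.44, 0.50, 0.49]

# One-way ANOVA across the two groups of tasks.
f_stat, p_value = stats.f_oneway(proportions_discrete, proportions_continuous)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```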
Compounds (used across all 27 tasks) were the most common of the five action types, mainly used to perform continuous tasks, particularly the object transformation tasks. Our participants mentioned two key reasons why they preferred Compounds for object transformation tasks: (1) wrist gestures were simple and easy to perform; and (2) buttons allowed adding extra functions to wrist gestures so that similar gestures could be associated with directional pairs.
Taps (20 of 27 tasks) were the next most preferred action type. Notably, Taps were mainly used for the camera tasks. Participants mentioned that buttons are most suitable for camera tasks because (1) they allowed simple interactions and (2) they did not cover the visual content on the screen, making them particularly convenient for taking photos or recording videos. Notably, Tap and Hold actions were used to perform the state toggles “Select the target” and “Long press on the target”.
Participants associated wrist movements with tasks that resemble symbolic actions. Scrolls were highly preferred for tasks resembling vertical movements (e.g., Select next section and Select previous section), while Swipes were mostly associated with tasks involving horizontal movements (e.g., Go to next target and Go to the previous target). A one-way ANOVA test revealed a statistically significant effect of action type (F(1,25) = 26.741, p < 0.001); users preferred Tap actions for the discrete tasks in HAR applications, which confirms the strong influence of previous interaction experience acquired from touchscreen devices.

5.5. Consensus Gesture Set for HAR Interactions

We isolated 891 preferred gestures (33 participants × 27 tasks) from the original 2673 gestures. Fifty-seven unique gestures were used to produce these 891 preferred gestures for the 27 tasks. To create a user-preferred gesture set, we picked the gesture that achieved the highest consensus for each task. This resulted in 13 unique gestures covering the 27 tasks, representing 599 of the 891 gestures, or 67.23% of the user-preferred gestures (see Figure 7).
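The selection of the consensus set can be expressed compactly as "keep the most frequent preferred gesture per task"; the sketch below illustrates this rule with hypothetical gesture codes and tallies.

```python
from collections import Counter

# Hypothetical preferred-gesture codes per task (one entry per participant).
preferred = {
    "Select the target": ["tap_b1", "tap_b1", "hold_b3", "tap_b1", "tap_b2"],
    "Scroll up": ["extension", "extension", "flexion", "extension", "tap_b1"],
}

# Consensus gesture = the most frequent proposal for each task.
consensus = {task: Counter(codes).most_common(1)[0][0]
             for task, codes in preferred.items()}

# How many of the preferred gestures the consensus set represents.
coverage = sum(max(Counter(codes).values()) for codes in preferred.values())
print(consensus, coverage)
```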
We asked our participants to rate their preferred gesture proposals from 1 (very poor fit) to 7 (very good fit) to denote their confidence in the usability of their preferred gestures in two categories. We compared the subjective ratings for social acceptance and comfort of use between the user-preferred gesture set and the discarded set. The average scores for the social acceptability of the gestures for the given tasks were 6.019 (SD = 0.992) and 4.830 (SD = 1.603), and the average scores for comfort of use with our proposed design were 6.362 (SD = 0.366) and 5.612 (SD = 0.844), respectively. Independent t-tests confirmed that the user-preferred set was rated significantly higher than the discarded set on both factors: social acceptance (p = 0.023) and comfort of use (p = 0.005). Therefore, in terms of social acceptance and comfort of use, the gestures in the user-preferred set are more suitable for HAR interactions in dynamic outdoor environments than those in the discarded set.

5.6. Users’ Feedback

All participants were encouraged to share their opinions and suggestions during the semi-structured interview. All participants felt comfortable wearing our proposed prototype to perform gestures for the 27 HAR tasks. Nineteen participants (4 female) expressed that our proposed interface is well suited for candid interaction [53], especially in a public place. When asked about the positions of the buttons, all 33 participants were satisfied with the current locations of the three soft buttons; however, three participants (P3, P25, P27) recommended adding an additional button. In particular, two of them preferred to have the fourth button on the side of the index finger between buttons one and two, while the other preferred to have it on the palmar side of the index finger. One user (P32) suggested that the buttons could provide haptic feedback (e.g., vibration), while another (P10) proposed an additional slider-type control on the fabric-based interface. All participants mentioned that the glove-based single-handed interface was convenient for controlling the camera, as it does not block the display the way touchscreen input does. Five female participants expressed that they would like a fancier and more colorful interface.

6. Discussion

In this work, we adopted and extended the methodology proposed by [39] to identify usable gestures for HAR interactions. We defined all possible wrist and thumb-to-index touch gestures supported by our prototype before the study began and grouped the elicited gestures using unique gesture codes. All 33 participants utilized the wide range of wrist and thumb-to-index touch gestures supported by our design to elicit distinct gestures for each HAR task. We used our HAR app to introduce all 27 tasks (most of which are dichotomous pairs) in the same order to all participants. Although we set a one-minute time limit for thinking of three different gestures for each task, all participants came up with their three gestures within 45 s. As mentioned in the results section, our user-elicited gestures for this specific set of HAR tasks, produced using a hand-worn interface, achieved medium agreement, which is in line with a previous elicitation study using a non-functional fabric-based prototype [39].
All participants performed three gestures for each task while holding the mobile phone in their left hand. Thus, they had to keep looking at the touchscreen to understand the tasks and to associate their preferred gesture with each task. A significant number of gestures in the user-preferred set (5 simple touch gestures and 4 touch-and-wrist gestures) were touchscreen-inspired tap and hold gestures. Nevertheless, our participants were not dominated by legacy bias, as they preferred two distinct gestures for state toggles. This finding contrasts with a prior elicitation study [35] that did not use any prototype, where participants were often forced to use the same gesture for state toggles; with our method, participants were able to perform two different gestures for state toggles.

6.1. Design Recommendations for Fabric-Based Interface for HAR Interactions

In this section, we discuss some of our participants’ preferred gestures in more detail, propose design guidelines for using wearable smart textiles for HAR interactions, and recommend the following set of suggestions for further investigating the use of fabric-based interfaces for HAR interactions.

6.1.1. Simple Gestures Were Preferred for HAR Interactions

Of all user-preferred gestures, 82.75% (discrete tasks) and 49.78% (continuous tasks) were “simple” gestures, i.e., gestures performed using only one action, either a touch or a wrist gesture, as defined in our gesture taxonomy (see Figure 7a–i). Participants reported that simple gestures were convenient to perform while complementing the primary task. However, they preferred complex gestures for 3D object manipulation tasks (see Figure 7j–m). We also found that complex gestures were less preferred for discrete (17.25%) than for continuous tasks (50.22%), showing a clear preference for simple gestures for discrete tasks. These findings align with a prior elicitation study that used a fabric-based interface for interactions with in-vehicle systems [39].

6.1.2. Utilize Familiar Touch-Based Input Methods for Discrete HAR Tasks

We found that 51.05% of the gestures proposed for the discrete tasks were touch gestures (37.53% tap and 13.52% hold). The literature has documented the effect of users’ prior experience [48]. Interestingly, participants preferred two different variations of touch gestures for discrete tasks, adopted from their prior experience [31]. They favored touch gestures for toggling between states because they wanted to perform quick actions in a fairly short time. For example, the tap gesture was preferred for switching between photo and video mode (see Figure 7a), while the hold gesture was preferred for long-pressing on the target (see Figure 7d), influenced particularly by the semantic expression of the task. As reported in prior elicitation studies (e.g., [35,54]), most of our users preferred identical touch gestures for state toggles; this suggests that, for HAR interactions, gesture designers need to consider the existing metaphors related to the nature of the tasks.

6.1.3. Design Wrist Gestures for Continuous HAR Tasks

Of all proposed gestures, 36.58% (continuous tasks) and 31.70% (discrete tasks) were simple wrist gestures. Participants preferred wrist gestures for continuous tasks even though touch gestures were available: only 13.20% of the gestures for continuous tasks were touch gestures (tap: 1.95%; hold: 11.26%). We found that participants used flexion/extension in 86.89% of their simple wrist gestures and in 89.21% of their complex gestures. Participants used in-air gestures to manipulate imaginary controls when associating wrist gestures with continuous tasks, e.g., pushing a virtual object down (Figure 7f) or away in mid-air (Figure 7g), or sliding an imaginary control to either side (Figure 7h,i). This association of wrist gestures with imaginary objects produced more dichotomous gestures, i.e., similar gestures with opposite directions, as users tended to pair related tasks (Figure 7f–i). We recommend associating wrist gestures with physical movements to create an intuitive mapping between movement and action, e.g., performing downward flexion replicates the physical action of moving an object down with control.

6.1.4. Favor Stretchable Fabric with Fingerless Design

We designed a fingerless hand glove (made of cotton and Lycra) as the physical interface. Three soft foam buttons were positioned on the index finger between the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints. The custom-made fingerless design allowed users full control of their thumb, letting it rotate comfortably to perform touch (tap and hold) gestures on the index finger. We recommend that, to be practical and usable, fabric-based interfaces be thin, lightweight, and exceptionally stretchy, with increased elasticity that enhances comfort and breathability.

6.1.5. Consider Resistive Sensing Technique to Capture Both the Wrist and Touch Inputs

Prior studies on fabric-based interfaces demonstrated the advantages of deploying resistive sensing technology to unobtrusively detect both wrist gestures [46] and thumb-to-index inputs [24]. They used fabric-based strain sensors to capture wrist movements and pressure sensors to detect legacy-inspired touch inputs. Our proposed design supports both wrist and thumb-to-index touch input methods. To recognize the gestures in our user-defined set, we recommend using two opposing fabric-based piezoresistive strain sensors [55] on the wrist and pressure sensors [56] on the index finger. The strain sensors can be sewn and the pressure sensors glued onto the hand glove to detect both wrist and thumb-to-index touch inputs.
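As a rough illustration of this recommendation, the sketch below maps readings from one opposing pair of wrist strain sensors and three finger pressure sensors onto the gesture vocabulary. The thresholds and channel layout are assumptions rather than measurements; a working implementation would additionally need per-user calibration, debouncing, and a second sensor pair for ulnar/radial deviation.

```python
# Assumed, illustrative thresholds; real values require per-user calibration.
STRAIN_THRESHOLD = 0.30   # normalized strain difference between the opposing sensors
PRESS_THRESHOLD = 0.50    # normalized pressure on a soft foam button
HOLD_DURATION_S = 0.60    # minimum press duration distinguishing hold from tap


def classify_wrist(strain_dorsal, strain_palmar):
    """Flexion stretches the dorsal sensor; extension stretches the palmar one."""
    if strain_dorsal - strain_palmar > STRAIN_THRESHOLD:
        return "flexion"
    if strain_palmar - strain_dorsal > STRAIN_THRESHOLD:
        return "extension"
    return None


def classify_touch(pressures, press_duration_s):
    """pressures: normalized readings from buttons one to three, in order."""
    for button, p in enumerate(pressures, start=1):
        if p > PRESS_THRESHOLD:
            kind = "hold" if press_duration_s >= HOLD_DURATION_S else "tap"
            return f"{kind}_b{button}"
    return None


print(classify_wrist(0.72, 0.31))            # -> "flexion"
print(classify_touch([0.1, 0.8, 0.0], 0.2))  # -> "tap_b2"
```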

6.1.6. Design Fabric-Based Interfaces That Foster Socially Acceptable Gestures

The aim of our study is to design an unobtrusive fabric-based interface for HAR applications that end-users are willing to use in public settings. Usability is defined by Bevan as “quality in use” [57]. However, a user’s readiness to use ubiquitous devices, particularly wearable devices, is not limited to the quality of the device. Beyond the usual usability aspects, the decision to use a wearable device, particularly a fabric-based one, rests on numerous factors, mainly (1) wearing comfort and (2) social acceptability. Knight et al. [58] reported that factors both internal and external to the wearer significantly influence how comfortable they feel about wearing and using a device and, in turn, their intention to use it. Similarly, Rice and Brewster [59] reported that social acceptability is influenced by various factors, such as the device’s appearance, social status, and the cultural conventions associated with it. Our proposed design allows both familiar touch and mid-air gestural inputs in a subtle, unobtrusive posture, and the users’ subjective feedback shows that the gestures in the user-preferred set are more suitable for HAR interactions in dynamic outdoor environments in terms of social acceptance and comfort of use. Furthermore, fabric-based wearable interfaces are useful in extremely cold places (e.g., Siberia), where touch interaction with HAR devices would otherwise require users to remove their gloves.

6.2. Limitations

We conducted our study inside a room, and participants could rest their hands on the table while holding the AR device to produce the gestures. It would be interesting to investigate whether users would produce the same interactions in other scenarios, such as standing or even walking. We used a non-functional fabric-based fingerless glove prototype in our study. We also excluded pronation/supination of the forearm from our design because, during the series of pilot studies, participants did not prefer these movements and mentioned difficulties associating them with the given 27 HAR tasks. Despite the absence of interactive functionality and of some distinct wrist movements, we were still able to understand users’ behavior and their responses to the prototype as an input interface for HAR interactions. None of our participants were left-handed; thus, our gesture set is only suitable for right-handed users.

7. Conclusions

In this study, we investigated the use of a textile-based interface for HAR interactions. To explore the design space of hand-worn fabric-based interfaces as an alternative interface for HAR applications, we designed a glove-based prototype that supports both wrist and thumb-to-index input methods. The fingerless design allowed users to comfortably rotate their thumb to perform touch gestures on the soft foam buttons integrated into the index finger. We recruited 33 potential end-users for the elicitation study. By following a methodology for eliciting gestures with a textile-based wearable interface, we were able to elicit gestures for HAR interactions. In addition to reflecting user behavior, our user-preferred gesture set has properties that make it usable in dynamic HAR environments, such as social acceptance and comfort of use. We have also presented a taxonomy of wrist and thumb-to-index touch gestures useful for performing interactions with HAR devices. By identifying gestures in our study, we have gained insight into user-preferred gestures and derived design guidelines for further exploration. Our results suggest that using fabric-based interfaces for HAR interactions is simple and natural, and it discourages users from moving their fingers across the screen, which could occlude the on-screen content.

Author Contributions

Conceptualization, V.N., H.-N.L. and K.K.-T.L.; Data curation, R.S. and H.X.; Formal analysis, V.N., R.S., H.-N.L., H.X. and K.H.; Funding acquisition, H.-N.L. and K.H.; Investigation, V.N., R.S., H.X. and H.-N.L.; Methodology, V.N., H.-N.L. and K.H.; Project administration, H.-N.L. and K.K.-T.L.; Software, R.S. and H.X.; Supervision, H.-N.L.; Validation, V.N. and H.-N.L.; Visualization, V.N. and R.S.; Writing – original draft, V.N., R.S., H.-N.L., H.X., K.K.-T.L. and K.H.

Funding

This research was funded by Xi’an Jiaotong-Liverpool University (XJTLU) Key Program Special Fund (#KSF-A-03) and XJTLU Research Development Fund (#RDF-13-02-19).

Acknowledgments

We thank all the volunteers who participated in the experiment for their time. We also thank the reviewers for their comments and suggestions that have helped improve our paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Azuma, R.T. A Survey of Augmented Reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [Google Scholar] [CrossRef]
  2. Mekni, M.; Lemieux, A. Augmented Reality: Applications, Challenges and Future Trends. Appl. Comput. Sci. 2014, 205, 205–214. [Google Scholar]
  3. Chatzopoulos, D.; Bermejo, C.; Huang, Z.; Hui, P. Mobile Augmented Reality Survey: From Where We Are to Where We Go. IEEE Access 2017, 5, 6917–6950. [Google Scholar] [CrossRef]
  4. Zhou, F.; Duh, H.B.-L.; Billinghurst, M. Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR. In Proceedings of the 2008 7th IEEE/ACM International Symposium on Mixed and Augmented Reality, Cambridge, UK, 15–18 September 2008; pp. 193–202. [Google Scholar]
  5. Kim, K.; Billinghurst, M.; Bruder, G.; Duh, H.B.L.; Welch, G.F. Revisiting trends in augmented reality research: A review of the 2nd Decade of ISMAR (2008–2017). IEEE Trans. Vis. Comput. Graph. 2018, 24, 2947–2962. [Google Scholar] [CrossRef] [PubMed]
  6. Bekele, M.K.; Town, C.; Pierdicca, R.; Frontoni, E.; Malinverni, E.V.A.S. A Survey of Augmented, Virtual, and Mixed Reality for Cultural Heritage. ACM J. Comput. Cult. Herit. 2018, 11, 7. [Google Scholar] [CrossRef]
  7. Polvi, J.; Taketomi, T.; Yamamoto, G.; Dey, A.; Sandor, C.; Kato, H. SlidAR: A 3D positioning method for SLAM-based handheld augmented reality. Comput. Graph. 2016, 55, 33–43. [Google Scholar] [CrossRef] [Green Version]
  8. Grandi, J.G.; Debarba, H.G.; Bemdt, I.; Nedel, L.; MacIel, A. Design and Assessment of a Collaborative 3D Interaction Technique for Handheld Augmented Reality. In Proceedings of the 25th IEEE Conference Virtual Reality 3D User Interfaces, VR 2018, Reutlingen, Germany, 18–22 March 2018; pp. 49–56. [Google Scholar]
  9. Mossel, A.; Venditti, B.; Kaufmann, H. 3DTouch and HOMER-S: Intuitive Manipulation Techniques for One-Handed Handheld Augmented Reality. In Proceedings of the Virtual Reality International Conference on Laval Virtual—VRIC ’13, Laval, France, 20–22 March 2013; p. 1. [Google Scholar]
  10. Samini, A.; Palmerius, K.L. A study on improving close and distant device movement pose manipulation for hand-held augmented reality. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology—VRST ’16, Munich, Germany, 2–4 November 2016; pp. 121–128. [Google Scholar]
  11. Bai, H.; Lee, G.A.; Ramakrishnan, M.; Billinghurst, M. 3D gesture interaction for handheld augmented reality. In Proceedings of the SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications on—SA ’14, Shenzhen, China, 3–6 December 2014; pp. 1–6. [Google Scholar]
  12. Wacker, P.; Nowak, O.; Voelker, S.; Borchers, J. ARPen: Mid-Air Object Manipulation Techniques for a Bimanual AR System with Pen & Smartphone. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, Glasgow, UK, 4–9 May 2019; pp. 1–12. [Google Scholar]
  13. Goh, E.S.; Sunar, M.S.; Ismail, A.W. 3D object manipulation techniques in handheld mobile augmented reality interface: A review. IEEE Access 2019, 7, 40581–40601. [Google Scholar] [CrossRef]
  14. Yin, J.; Fu, C.; Zhang, X.; Liu, T. Precise Target Selection Techniques in Handheld Augmented Reality Interfaces. IEEE Access 2019, 7, 17663–17674. [Google Scholar] [CrossRef]
  15. Liang, H.N.; Williams, C.; Semegen, M.; Stuerzlinger, W.; Irani, P. An investigation of suitable interactions for 3D manipulation of distant objects through a mobile device. Int. J. Innov. Comput. Inf. Control. 2013, 9, 4737–4752. [Google Scholar]
  16. Liang, H.-N.; Williams, C.; Semegen, M.; Stuerzlinger, W.; Irani, P. User-defined surface + motion gestures for 3d manipulation of objects at a distance through a mobile device. In Proceedings of the 10th Asia Pacific Conference on Computer Human Interaction—APCHI ’12, Shimane, Japan, 28–31 August 2012; p. 299. [Google Scholar]
  17. Yusof, C.S.; Bai, H.; Billinghurst, M.; Sunar, M.S. A review of 3D gesture interaction for handheld augmented reality. J. Teknol. 2016, 78, 15–20. [Google Scholar] [CrossRef]
  18. Malhotra, Y.; Galletta, D.F. Extending the technology acceptance model to account for social influence: Theoretical bases and empirical validation. In Proceedings of the 32nd Annual Hawaii International Conference on System Sciences (HICSS-32), Maui, HI, USA, 5–8 January 1999; p. 14. [Google Scholar]
  19. Stoppa, M.; Chiolerio, A. Wearable electronics and smart textiles: A critical review. Sensors 2014, 14, 11957–11992. [Google Scholar] [CrossRef]
  20. Weiser, M. The Computer for the 21st Century. Sci. Am. 1991, 265, 94–104. [Google Scholar] [CrossRef]
  21. Nanjappan, V.; Liang, H.-N.; Lau, K.; Choi, J.; Kim, K.K. Clothing-based wearable sensors for unobtrusive interactions with mobile devices. In Proceedings of the 2017 International SoC Design Conference (ISOCC), Seoul, Korea, 5–8 November 2017; pp. 139–140. [Google Scholar]
  22. Parzer, P.; Sharma, A.; Vogl, A.; Steimle, J.; Olwal, A.; Haller, M. SmartSleeve: Real-time Sensing of Surface and Deformation Gestures on Flexible, Interactive Textiles, using a Hybrid Gesture Detection Pipeline. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology—UIST ’17, Quebec City, QC, Canada, 22–25 October 2017; pp. 565–577. [Google Scholar]
  23. Schneegass, S.; Voit, A. GestureSleeve: Using Touch Sensitive Fabrics for Gestural Input on the Forearm for Controlling Smartwatches. In Proceedings of the 2016 ACM International Symposium on Wearable Computers—ISWC ’16, Heidelberg, Germany, 12–16 September 2016; pp. 108–115. [Google Scholar]
  24. Yoon, S.H.; Huo, K.; Nguyen, V.P.; Ramani, K. TIMMi: Finger-worn Textile Input Device with Multimodal Sensing in Mobile Interaction. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction—TEI ’15, Stanford, CA, USA, 15–19 January 2015; pp. 269–272. [Google Scholar]
  25. Peshock, A.; Dunne, L.E.; Duvall, J. Argot: A wearable one-handed keyboard glove. In Proceedings of the 2014 ACM International Symposium on Wearable Computers Adjunct Program—ISWC ’14 Adjunct, Seattle, WA, USA, 13–17 September 2014; pp. 87–92. [Google Scholar]
  26. Hsieh, Y.-T.; Jylhä, A.; Orso, V.; Gamberini, L.; Jacucci, G. Designing a Willing-to-Use-in-Public Hand Gestural Interaction Technique for Smart Glasses. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems—CHI ’16, San Jose, CA, USA, 7–12 May 2016; pp. 4203–4215. [Google Scholar]
  27. Hansson, P.; Wallberg, A.; Simsarian, K. Techniques for ‘Natural’ Interaction in Multi-User CAVE-Like Environments. In Poster Proceedings of the Fifth European Conference on Computer Supported Cooperative Work—ECSCW ’97, Lancaster, UK, 7–11 September 1997. [Google Scholar]
  28. Nacenta, M.A.; Kamber, Y.; Qiang, Y.; Kristensson, P.O. Memorability of pre-designed and user-defined gesture sets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI ’13, Paris, France, 27 April–2 May 2013; p. 1099. [Google Scholar]
  29. Wobbrock, J.O.; Aung, H.H.; Rothrock, B.; Myers, B.A. Maximizing the guessability of symbolic input. In Proceedings of the CHI ’05 Extended Abstracts on Human Factors in Computing Systems–CHI ’05, Portland, OR, USA, 2–7 April 2005; pp. 1869–1872. [Google Scholar]
  30. Seyed, T.; Burns, C.; Sousa, M.C.; Maurer, F.; Tang, A. Eliciting Usable Gestures for Multi-Display Environments. In Proceedings of the International Conference on Interactive Tabletops and Surfaces (ITS’12), Cambridge, MA, USA, 11–14 November 2012; pp. 41–50. [Google Scholar]
  31. Wobbrock, J.O.; Morris, M.R.; Wilson, A.D. User-defined gestures for surface computing. In Proceedings of the 27th international conference on Human factors in computing systems—CHI 09, Boston, MA, USA, 4–9 April 2009; p. 1083. [Google Scholar]
  32. Nanjappan, V.; Liang, H.-N.; Lu, F.; Papangelis, K.; Yue, Y.; Man, K.L. User-elicited dual-hand interactions for manipulating 3D objects in virtual reality environments. Hum. Cent. Comput. Inf. Sci. 2018, 8, 31. [Google Scholar] [CrossRef]
  33. Vatavu, R.-D.; Wobbrock, J.O. Formalizing Agreement Analysis for Elicitation Studies: New Measures, Significance Test, and Toolkit. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems—CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 1325–1334. [Google Scholar]
  34. Piumsomboon, T.; Clark, A.; Billinghurst, M.; Cockburn, A. User-defined gestures for augmented reality. In Proceedings of the CHI ’13 Extended Abstracts on Human Factors in Computing Systems–CHI ’13, Paris, France, 27 April–2 May 2013; pp. 955–960. [Google Scholar]
  35. Chan, E.; Seyed, T.; Stuerzlinger, W.; Yang, X.-D.; Maurer, F. User Elicitation on Single-hand Microgestures. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 3403–3414. [Google Scholar]
  36. Gong, J.; Yang, X.-D.; Irani, P. WristWhirl: One-handed Continuous Smartwatch Input using Wrist Gestures. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology—UIST ’16, Tokyo, Japan, 16–19 October 2016; pp. 861–872. [Google Scholar]
  37. Rekimoto, J. GestureWrist and GesturePad: Unobtrusive wearable interaction devices. In Proceedings of the Fifth International Symposium on Wearable Computers, Zurich, Switzerland, 8–9 October 2001; pp. 21–27. [Google Scholar]
  38. Gheran, B.-F.; Vanderdonckt, J.; Vatavu, R.-D. Gestures for Smart Rings: Empirical Results, Insights, and Design Implications. In Proceedings of the 2018 Designing Interactive Systems Conference—DIS ’18, Hong Kong, China, 9–13 June 2018; pp. 623–635. [Google Scholar]
  39. Nanjappan, V.; Shi, R.; Liang, H.-N.; Lau, K.K.-T.; Yue, Y.; Atkinson, K. Towards a Taxonomy for In-Vehicle Interactions Using Wearable Smart Textiles: Insights from a User-Elicitation Study. Multimodal Technol. Interact. 2019, 3, 33. [Google Scholar] [CrossRef]
  40. Bianchi, M.; Haschke, R.; Büscher, G.; Ciotti, S.; Carbonaro, N.; Tognetti, A. A Multi-Modal Sensing Glove for Human Manual-Interaction Studies. Electronics 2016, 5, 42. [Google Scholar] [CrossRef]
  41. Whitmire, E.; Jain, M.; Jain, D.; Nelson, G.; Karkar, R.; Patel, S.; Goel, M. DigiTouch: Reconfigurable Thumb-to-Finger Input and Text Entry on Head-mounted Displays. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2017, 1, 1–21. [Google Scholar]
  42. Miller, S.; Smith, A.; Bahram, S.; Amant, R.S. A glove for tapping and discrete 1D/2D input. In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces—IUI ’12, Lisbon, Portugal, 14–17 February 2012; p. 101. [Google Scholar]
  43. Wolf, K.; Naumann, A.; Rohs, M.; Müller, J. A taxonomy of microinteractions: Defining microgestures based on ergonomic and scenario-Dependent requirements. Lect. Notes Comput. Sci. 2011, 6946, 559–575. [Google Scholar]
  44. Cheung, V.; Eady, A.K.; Girouard, A. Exploring Eyes-free Interaction with Wrist-Worn Deformable Materials. In Proceedings of the Tenth International Conference on Tangible, Embedded, and Embodied Interaction—TEI ’17, Yokohama, Japan, 20–23 March 2017; pp. 521–528. [Google Scholar]
  45. Lopes, P.; Ion, A.; Mueller, W.; Hoffmann, D.; Jonell, P.; Baudisch, P. Proprioceptive Interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems—CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 939–948. [Google Scholar]
  46. Strohmeier, P.; Vertegaal, R.; Girouard, A. With a flick of the wrist: Stretch Sensors as Lightweight Input for Mobile Devices. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction—TEI ’12, Kingston, ON, Canada, 19–22 February 2012; p. 307. [Google Scholar]
  47. Ferrone, A.; Maita, F.; Maiolo, L.; Arquilla, M.; Castiello, A.; Pecora, A.; Jiang, X.; Menon, C.; Colace, L. Wearable band for hand gesture recognition based on strain sensors. In Proceedings of the 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics, Singapore, 26–29 June 2016; pp. 1319–1322. [Google Scholar]
  48. Morris, M.R.; Danielescu, A.; Drucker, S.; Fisher, D.; Lee, B.; Wobbrock, J.O. Reducing legacy bias in gesture elicitation studies. Interactions 2014, 21, 40–45. [Google Scholar] [CrossRef]
  49. Rahman, M.; Gustafson, S.; Irani, P.; Subramanian, S. Tilt Techniques: Investigating the Dexterity of Wrist-based Input. In Proceedings of the 27th international conference on Human factors in computing systems—CHI 09, Boston, MA, USA, 4–9 April 2009; p. 1943. [Google Scholar]
  50. Ruiz, J.; Vogel, D. Soft-Constraints to Reduce Legacy and Performance Bias to Elicit Whole-body Gestures with Low Arm Fatigue. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems—CHI ’15, Seoul, Korea, 18–23 April 2015; pp. 3347–3350. [Google Scholar]
  51. Vatavu, R.D.; Pentiuc, S.G. Multi-Level Representation of Gesture as Command for Human Computer Interaction. Comput. Inform. 2008, 27, 837–851. [Google Scholar]
  52. Ruiz, J.; Li, Y.; Lank, E. User-defined motion gestures for mobile interaction. In Proceedings of the 2011 annual conference on Human factors in computing systems—CHI ’11, Vancouver, BC, Canada, 7–12 May 2011; p. 197. [Google Scholar]
  53. Ens, B.; Grossman, T.; Anderson, F.; Matejka, J.; Fitzmaurice, G. Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology—UIST ’15, Charlotte, NC, USA, 11–15 November 2015; pp. 467–476. [Google Scholar]
  54. Morris, M.R. Web on the wall: Insights from a Multimodal Interaction Elicitation Study. In Proceedings of the 2012 ACM international conference on Interactive tabletops and surfaces—ITS ’12, Cambridge, MA, USA, 11–14 November 2012; p. 95. [Google Scholar]
  55. Seyedin, S.; Zhang, P.; Naebe, M.; Qin, S.; Chen, J.; Wang, X.; Razal, J.M. Textile strain sensors: A review of the fabrication technologies, performance evaluation and applications. Mater. Horiz. 2019, 6, 219–249. [Google Scholar] [CrossRef]
  56. Tewari, A.; Gandla, S.; Bohm, S.; McNeill, C.R.; Gupta, D. Highly Exfoliated MWNT-rGO Ink-Wrapped Polyurethane Foam for Piezoresistive Pressure Sensor Applications. ACS Appl. Mater. Interfaces 2018, 10, 5185–5195. [Google Scholar] [CrossRef]
  57. Bevan, N. Usability is quality of use. Adv. Hum. Factors Ergon. 1995, 20, 349–354. [Google Scholar]
  58. Knight, J.F.; Baber, C. A Tool to Assess the Comfort of Wearable Computers. Hum. Factors J. Hum. Factors Ergon. Soc. 2005, 47, 77–91. [Google Scholar] [CrossRef]
  59. Rico, J.; Brewster, S. Usable gestures for mobile interfaces. In Proceedings of the 28th international conference on Human factors in computing systems—CHI ’10, Atlanta, GA, USA, 10–15 April 2010; p. 887. [Google Scholar]
Figure 1. (a) Fabric-based prototype used in our study for right-handed participants. Three soft foam buttons (two on the sides of the index finger, on the proximal and middle phalanges, and one on the palmar side of the proximal phalange) were glued onto a fingerless glove. (b) Illustration of the finger joints of the index finger. (c–e) Sample gestures supported by the in-house developed prototype: (c) wrist only; (d) touch only; and (e) touch and wrist.
Figure 2. A participant performing a gesture while wearing the fabric-based prototype on the right hand and holding the handheld augmented reality (HAR) device in the left hand. Two cameras were used to capture the entire process.
Figure 3. Variety of gestures performed by our participants using one hand (the right hand) while holding the device in the other hand: (a) wrist gestures and (b) thumb-to-index touch gestures.
Figure 4. Frequency distribution of complexity of gestures in each category. Simple gestures were highly preferred for discrete tasks.
Figure 5. Observed percentages of wrist and touch gestures for HAR interactions. Button one was highly preferred for both simple and complex gestures. Simple wrist gestures were preferred for continuous tasks and simple touch gestures for discrete tasks, showing a clear influence of prior experience.
Figure 6. Frequency distribution of action types in the preferred gesture set for 27 tasks. Tap action was highly preferred for the camera tasks.
Figure 7. User-preferred gesture set for HAR interactions. Simple touch gestures: (a) tap button one, (b) tap button two, (c) tap button three, (d) hold button two, (e) hold button three; Simple wrist gestures: (f) downward flexion, (g) upward extension, (h) leftward flexion, (i) rightward extension; Complex gestures: (j) hold button one and downward flexion, (k) hold button two and downward flexion, (l) hold button two and leftward flexion, (m) hold button two and rightward extension.
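As an illustration only, the gesture vocabulary shown in Figure 7 can be expressed as a small data model. The sketch below is not the authors' implementation; the names (TouchAction, WristAction, Gesture) and field layout are hypothetical, but the actions and button numbers come directly from Figures 1 and 7.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class TouchAction(Enum):   # thumb-to-index touch on a foam button (Figure 1a)
    TAP = auto()
    HOLD = auto()

class WristAction(Enum):   # wrist movement directions from Figure 7
    FLEXION_DOWN = auto()
    EXTENSION_UP = auto()
    FLEXION_LEFT = auto()
    EXTENSION_RIGHT = auto()

@dataclass(frozen=True)
class Gesture:
    """A gesture is touch-only, wrist-only, or a touch-and-wrist combination."""
    touch: Optional[TouchAction] = None
    button: Optional[int] = None        # 1, 2, or 3 (see Figure 1a)
    wrist: Optional[WristAction] = None

# Examples drawn from the user-preferred set in Figure 7:
tap_button_one = Gesture(touch=TouchAction.TAP, button=1)            # Figure 7a
downward_flexion = Gesture(wrist=WristAction.FLEXION_DOWN)           # Figure 7f
hold_b2_flex_down = Gesture(touch=TouchAction.HOLD, button=2,
                            wrist=WristAction.FLEXION_DOWN)          # Figure 7k
```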
Table 1. The selected list of 27 HAR tasks used in our study. The tasks are classified into three different categories and presented via our HAR app.
User Interface (Navigation) | Object Transformation | Camera
(T1) Select next section | (T11) Move left | (T21) Switch to photo
(T2) Select previous section | (T12) Move right | (T22) Switch to front camera
(T3) Go to next target | (T13) Move closer | (T23) Switch to rear camera
(T4) Go to previous target | (T14) Move further | (T24) Take a photo
(T5) Select the target | (T15) Uniform scale up | (T25) Switch to video
(T6) Long press on the target | (T16) Uniform scale down | (T26) Start video recording
(T7) Scroll up | (T17) Yaw left | (T27) Stop video recording
(T8) Scroll down | (T18) Yaw right |
(T9) Swipe left | (T19) Pitch up |
(T10) Swipe right | (T20) Pitch down |
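For readers who want to reuse the task set, the 27 tasks in Table 1 can be encoded as a simple mapping. The sketch below is a hypothetical Python encoding (the names HAR_TASKS and TASKS are ours, not from the paper); the category and task names are taken verbatim from Table 1, in the same T1–T27 order.

```python
# Hypothetical encoding of the 27 HAR tasks from Table 1, grouped by category.
HAR_TASKS = {
    "User Interface (Navigation)": [
        "Select next section", "Select previous section", "Go to next target",
        "Go to previous target", "Select the target", "Long press on the target",
        "Scroll up", "Scroll down", "Swipe left", "Swipe right",
    ],
    "Object Transformation": [
        "Move left", "Move right", "Move closer", "Move further",
        "Uniform scale up", "Uniform scale down", "Yaw left", "Yaw right",
        "Pitch up", "Pitch down",
    ],
    "Camera": [
        "Switch to photo", "Switch to front camera", "Switch to rear camera",
        "Take a photo", "Switch to video", "Start video recording",
        "Stop video recording",
    ],
}

# Flat (Txx, task) pairs following Table 1's numbering (T1-T10 UI, T11-T20
# object transformation, T21-T27 camera).
TASKS = [(f"T{i}", name)
         for i, name in enumerate(
             [t for tasks in HAR_TASKS.values() for t in tasks], start=1)]
```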
Table 2. Agreement rates for the 27 tasks in two categories (discrete and continuous), with the effects of users’ prior experience (experience with HAR applications and with using a wearable device). The highest agreement rates are highlighted in dark gray, the lowest in light gray, and statistically significant differences in green.
Category | Task | AR | HAR Experience: With | HAR Experience: Without | p | Wearable Device: Used | Wearable Device: Never | p
Discrete | (T1) Select next section | 0.328 | 0.307 | 0.371 | 0.257 | 0.392 | 0.264 | 0.057
Discrete | (T2) Select previous section | 0.167 | 0.15 | 0.171 | 0.7 | 0.187 | 0.132 | 0.354
Discrete | (T3) Go to next target | 0.11 | 0.078 | 0.114 | 0.51 | 0.129 | 0.077 | 0.39
Discrete | (T4) Go to previous target | 0.11 | 0.085 | 0.114 | 0.588 | 0.123 | 0.077 | 0.44
Discrete | (T5) Select the target | 0.205 | 0.144 | 0.295 | 0.02 | 0.228 | 0.143 | 0.177
Discrete | (T6) Long press on the target | 0.21 | 0.124 | 0.343 | 0.003 | 0.234 | 0.154 | 0.201
Discrete | (T21) Switch to photo | 0.053 | 0.033 | 0.057 | 0.656 | 0.041 | 0.044 | 0.959
Discrete | (T22) Switch to front camera | 0.106 | 0.092 | 0.095 | 0.845 | 0.129 | 0.088 | 0.494
Discrete | (T23) Switch to rear camera | 0.106 | 0.065 | 0.114 | 0.375 | 0.123 | 0.088 | 0.553
Discrete | (T24) Take a photo | 0.182 | 0.19 | 0.21 | 0.71 | 0.175 | 0.154 | 0.708
Discrete | (T25) Switch to video | 0.064 | 0.059 | 0.038 | 0.701 | 0.058 | 0.044 | 0.801
Discrete | (T26) Start recording | 0.133 | 0.118 | 0.21 | 0.12 | 0.105 | 0.154 | 0.417
Discrete | (T27) End recording | 0.172 | 0.17 | 0.257 | 0.178 | 0.105 | 0.275 | 0.019
Continuous | (T7) Scroll up | 0.11 | 0.131 | 0.067 | 0.258 | 0.105 | 0.132 | 0.65
Continuous | (T8) Scroll down | 0.178 | 0.229 | 0.124 | 0.082 | 0.193 | 0.165 | 0.63
Continuous | (T9) Swipe left | 0.189 | 0.19 | 0.162 | 0.616 | 0.24 | 0.154 | 0.175
Continuous | (T10) Swipe right | 0.184 | 0.19 | 0.152 | 0.499 | 0.24 | 0.143 | 0.133
Continuous | (T11) Move left | 0.22 | 0.255 | 0.152 | 0.088 | 0.181 | 0.242 | 0.316
Continuous | (T12) Move right | 0.227 | 0.301 | 0.133 | 0.012 | 0.193 | 0.231 | 0.52
Continuous | (T13) Move closer | 0.138 | 0.124 | 0.152 | 0.61 | 0.158 | 0.088 | 0.255
Continuous | (T14) Move further | 0.138 | 0.144 | 0.105 | 0.475 | 0.135 | 0.121 | 0.813
Continuous | (T15) Uniform scale up | 0.1 | 0.052 | 0.171 | 0.052 | 0.123 | 0.066 | 0.344
Continuous | (T16) Uniform scale down | 0.123 | 0.085 | 0.133 | 0.382 | 0.117 | 0.11 | 0.904
Continuous | (T17) Yaw left | 0.102 | 0.072 | 0.124 | 0.35 | 0.129 | 0.077 | 0.39
Continuous | (T18) Yaw right | 0.102 | 0.072 | 0.124 | 0.35 | 0.129 | 0.077 | 0.39
Continuous | (T19) Pitch up | 0.053 | 0.052 | 0.095 | 0.433 | 0.058 | 0.088 | 0.619
Continuous | (T20) Pitch down | 0.098 | 0.092 | 0.086 | 0.985 | 0.099 | 0.088 | 0.841
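For reference, the agreement rates (AR) reported in Table 2 are of the kind formalized by Vatavu and Wobbrock [33]. The sketch below is a minimal Python illustration with hypothetical gesture labels, not the authors' analysis code; it shows how AR is computed for a single task from the gestures proposed by the participants.

```python
from collections import Counter

def agreement_rate(proposals):
    """Vatavu-Wobbrock agreement rate AR(r) for one referent (task) r.

    proposals: one gesture label per participant, e.g.
    ["tap_b1", "tap_b1", "flex_down", ...].
    AR(r) = sum over groups P_i of identical proposals of
            |P_i| * (|P_i| - 1) / (|P| * (|P| - 1)),
    where P is the multiset of all proposals for r.
    """
    n = len(proposals)
    if n < 2:
        return 1.0  # assumption: a single proposal agrees with itself trivially
    groups = Counter(proposals)
    return sum(k * (k - 1) for k in groups.values()) / (n * (n - 1))

# Hypothetical example: 4 of 5 participants propose the same gesture for a task.
print(agreement_rate(["tap_b1", "tap_b1", "tap_b1", "tap_b1", "flex_down"]))  # 0.6
```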
