Article

LiLo: ADL Localization with Conventional Luminaries and Ambient Light Sensor

1 School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110180, China
2 Walmart Inc., Bentonville, AR 72712, USA
3 Department of Computer Science, Iowa State University, Ames, IA 50011, USA
* Author to whom correspondence should be addressed.
Submission received: 2 July 2022 / Revised: 3 August 2022 / Accepted: 9 August 2022 / Published: 11 August 2022
(This article belongs to the Special Issue Applications of Light Sensing Technology)

Abstract

Indoor localization is a key enabler for services related to activities of daily living (ADLs). Many studies invest effort and money in high-cost infrastructure with modified devices. In this paper, an indoor localization system (LiLo) that utilizes the ambient light sensor and orientation information on a smartphone to recognize ADLs is proposed. Indoor ADLs are recognized by analyzing the combination of visible-light-based localization, orientation, and time data. In the cold-start period, LiLo estimates the location based on the computed luminance field map and the most frequent orientation, validating the location result with angle-of-arrival information. Afterwards, LiLo produces locations with a machine learning classifier. Compared with previous works, LiLo avoids laborious device configuration and data collection during the off-line phase. Another advantage is that LiLo uses conventional luminaires and a standard smartphone, without extra infrastructure spread throughout rooms. Therefore, every resident with a smartphone can benefit from this technology. An experimental study using data collected from smartphones shows that LiLo achieves high localization accuracy at low cost.

1. Introduction

Indoor localization is a research area that has been gaining increasing attention. A wide range of location-based services (LBS) are attractive, such as navigating users inside shopping malls, pushing targeted advertisements, personalized recommendation, and proximity notification.
Activities of daily living (ADLs) are routine activities that people tend to do every day. For nursing-home care or in-home care evaluations and services, the ability to perform ADLs is a key factor. ADLs are highly correlated with indoor locations, and to provide LBS in the ADL domain, high location accuracy is essential.
Accurate indoor positioning is difficult to achieve with the global positioning system (GPS), because GPS signals are heavily attenuated indoors and devices often lose them entirely [1]. For that reason, several assistive techniques have been proposed. The radio frequency (RF)-based technique is one possible alternative: many RF-based studies have attempted to provide precise location information over the past 20 years, including wireless local area network, radio-frequency identification, cellular, ultrasound, Bluetooth, and so forth [2,3]. Other localization systems rely on signals such as magnetism [4] or lower-frequency FM broadcast radio for robust indoor fingerprinting [5]. Furthermore, the simultaneous localization and mapping (SLAM) technique is one approach to generating a map and navigating a robot [6]. These methods deliver positioning accuracies from tens of centimeters to several meters. High accuracy is required for ADL recognition applications that expect furniture-level differentiation within a small room.
The wireless signal is subject to interference from many sources, such as blocking by walls and floors, other wireless appliances, and unexpectedly poor signal strength. The Wi-Fi received signal strength indicator (RSSI) varies significantly over time and is susceptible to human presence, multipath, and fading, resulting in erratic location estimates. Apart from that, RF signals are also subject to electromagnetic (EM) interference. Hence, most RF-based techniques achieve relatively poor indoor positioning accuracy. One educational robotics project [7] provides a Wi-Fi positioning system to identify rooms for a robot built from an Android phone and an Arduino chip computer in a testbed. Using commodity Wi-Fi infrastructure, Refs. [8,9] analyze the multipath wireless channel to gain super-resolution within tens of centimeters. Such accurate estimation does satisfy room-level positioning for ADL recognition; yet, for most average houses and apartments, a high density of APs is not common. Instead of the prevalent RSSI-fingerprinting dataset, a new structure for data representation was proposed in [10]: a dataset with four categorical features and six numeric ones, together with a hierarchical framework for the fusion of floor and 2D coordinate estimation, which achieved considerable improvement in floor detection and horizontal coordinate estimation. Thus, one inevitable challenge of Wi-Fi and other RF-based indoor positioning systems (IPS) is costly, pervasive infrastructure and the related setup [11,12]. Energy is a crucial consideration for mobile devices; nevertheless, both obtaining a GPS fix and scanning for Wi-Fi signals drain a significant amount of energy.
Light is omnipresent and can be leveraged for more than illumination. Unlike RF signals, light signals are line-of-sight and stable in nature. In recent years, visible light communication (VLC)-based technology has been gaining attention in the research community. Furthermore, visible-light-based approaches have shown promise for indoor visible-light positioning (VLP) [13,14,15,16].
This work first uncovers the intrinsic logic of indoor ADL localization and presents a light-based indoor localization system (LiLo) that merely employs a single-point smartphone. It aims to keep the user device as simple as possible, with lower battery consumption and higher accuracy.
The remainder of this paper is structured as follows: Section 2 summarizes related work on localization methods based on indoor ambient illumination. The notion of key active areas, characteristics of activities of daily living, and some common localization challenges are described in Section 3. The optical channel model and the Radiosity rendering method are introduced in Section 4. Section 5 is devoted to the details of the triangulation-based algorithm on a luminance field map. Section 6 presents an overview of the proposed system. In Section 7, we describe the experimental environment and frequent ADL types. Section 8 discusses a series of experiments and reports the results. We conclude the paper in Section 9.

2. Related Work

In the VLC area, there are primarily two major hardware technologies supporting the algorithms that calculate receiver coordinates. One uses a photodiode (PD) to detect received signal strength (RSS) information; as distance varies with power attenuation, the receiver is located by lateration algorithms [17]. The other uses an image sensor to detect angle-of-arrival (AOA) information for an angulation algorithm to calculate the receiver location [18,19].
Instead of relying on channel measurements, imaging techniques can be used to measure geometric relations between luminaries for localization [14]. However, imaging techniques (camera and screen) suffer from fast battery consumption and privacy issues.
A few recent simulation works explore the visible-light positioning (VLP) area. In [14,20], image sensors are used for localization based on a lighting ray projection model. In [21], frequency division multiplexing was used to separate the peak-to-peak values of signals from different transmitting stations, and time-difference-of-arrival (TDOA) was inferred from the phase difference of arrival; proof-of-concept experiments used two white light-emitting diodes (LEDs) as transmitters. IDyLL [13], an indoor localization system using existing inertial and light sensors on smartphones, also relies on the lighting infrastructure. IDyLL does not use absolute intensity readings; instead, peak detection is employed to help infer the trajectory when the user passes under a luminary. IDyLL builds upon existing work on pedestrian dead reckoning (PDR) (e.g., [22,23]), and the device in IDyLL is required to face up. Displacement is estimated from many factors, including step counts, stride length estimation, velocity estimation, and heading orientation. In addition, the illumination peak detection algorithm is largely confined to the light arrangements of building hallways. The inertial measurement unit (IMU) [24] relies on inertial sensors to track a user by continuously estimating displacement from a known location. Previous works require knowledge of receiver orientations to solve for a position [25,26]. In [15], this orientation information was obtained using a receiver with a six-axis IMU, and a switching scheme was proposed in which the estimated receiver positions (ERPs) were switched based on the tilt angle of the receiver. Obviously, the tilt angle from the six-axis sensor limits the estimated error distance; therefore, the accuracy of angulation data from the six-axis sensor is critical for the accuracy of the estimated positions. Luxapose [27] requires a high density of overhead LED luminaries placed at known positions with identification beacons; a camera-equipped smartphone decodes the LED identifiers and determines the phone's absolute location and orientation in the local coordinate system with an angle-of-arrival (AOA) localization algorithm. Time-of-arrival (TOA) and AOA measurements are applied in a single-anchor localization (SAL) method [28] to achieve high-accuracy multi-agent localization.
In addition, localization with a single LED or multiple LEDs by a trilateration method is discussed in [29]. Epsilon [29] employs the light sensor on a smartphone to retrieve LED beacon information, measures the received signal strengths (RSSs) from multiple bulbs, and computes the distance to each bulb through an optical channel model. Afterwards, the location is estimated by decoding the beacon identifications. Nevertheless, the optical channels over which beacons are transmitted need to be free from interference from ambient light such as sunlight and fluorescent light. Epsilon deals with light sources at a similar height, while in reality they may be deployed at any height as needed.
In practice, such an ideal environment is rare. In real usage, the receiver (hence the light sensor) may be in an arbitrary orientation, which is considered a complicated problem by [29]. Existing visible light localization systems require customized LED drivers to emit identity beacons, which increases system cost. LiTell [30,31] enables visible light localization on unmodified existing light hardware by extracting high-frequency features from fluorescent lights. However, the method only applies to fluorescent tube lights, and the smartphone camera must be held horizontally.
Pulsar [32] introduces an indoor visible light positioning system that adopts incumbent fluorescent lights/LEDs and lightweight PDs to achieve continuous 3D localization with sub-meter precision. LiLo addresses the common situation of arbitrary receiver orientation and utilizes it as a helpful feature for ADL recognition. LiLo leverages the integrated orientation sensors (e.g., the IMU on a smartphone) to measure the device's attitude. Together with the light value, a tuple (light value, phone attitude, and other features) is recorded into the system. Then, this tuple is recognized as a location and its related ADL type.
The trilateration localization method in [26] computes the distances between a receiver and multiple light sources by varying the transmitting power; in this case, the LEDs are able to transmit modulated ID codes. The work in [26] assumes that the receiver is located on the floor, the LED bulbs are on the ceiling, and both the receiver axis and the transmitter axes are perpendicular to the ceiling. The authors of [33] offer landmarks with approximate room-level semantic localization depending on modulated LED bulbs. A VLC system using fluorescent lamps and a photodiode sensor was proposed in [16], which can also estimate the 2D location; here, a single photodiode (PD) was used to estimate the vertical and horizontal angles between the PD and the fluorescent lamps.
Furthermore, in [34], the proximity positioning concept is used to take a grid of transmitters as reference points with known coordinates. However, by nature, the accuracy cannot be better than the resolution of the source grid.
Fiatlux [35] performs room-level localization using light sensors, assuming uniform lighting conditions. This assumption may not hold all the time, users' movements are constrained to very slow speeds in order to obtain a match, and the position of the sensor is required to be fixed. Without special infrastructure for modulation, light bulbs and their associated locations can be identified with high accuracy by processing filtered frequency signals with machine learning algorithms [36]. SurroundSense [37] builds a map using several features found in typical indoor spaces, including ambient sound, light, and color, in addition to Wi-Fi RSS. This approach depends on calibration of the space of interest to construct a training dataset comprising RSS measurements at known positions.
RainbowLight [38] introduces a low-cost ambient light 3D localization approach. Exploiting a model that characterizes the relation among direction, light interference, and spectrum, RainbowLight calculates the direction to a chip after taking a photo containing the chip, and presents a direction intersection-based method to derive the location with multiple chips. Although RainbowLight calls for less device configuration, it needs complex signal processing and recognition overhead in indoor environments. The basic context information employed by classifiers can be collected entirely from context-related sensors. Mazilu et al. [39] implemented a framework that collects data from low-power ambient sensors; for continuous place detection, these ambient sensors avoid the typical energy-hungry location providers (GPS, network localization, or audio). The authors claim that the combination of light, temperature, humidity, and pressure sensors makes the footprint of a place recognizable. Data collected from these context-related sensors are fed into a C4.5 decision tree classifier that outputs the user's semantic location (such as at home, at the office, or in the car). The outcome of the system is a high-level place label instead of precise GPS coordinates; thus, it does not benefit fine-granularity trajectory processing.
User computing activity (keyboard keystrokes) has been monitored using the ambient light sensor of a smartwatch [40]. As stated, about ten dynamic discrete gestures are detected with high accuracy. In that work, the sensor works properly neither in very bright light nor in very dark environments.

3. Characteristics of Key Active Areas, Activities of Daily Living, and the Status Quo

Key active areas (KAAs) are the places where the resident conducts the most frequent indoor activities, such as a PC desk in a reading room, an island table in a kitchen, a sofa area in a living room, or a closet in a bathroom. Each KAA usually corresponds to one ADL. The locations of KAAs are largely determined by the floor plan and furniture layout; consider that people can only wash near a faucet and cook by a stove. The KAAs can be learned from the subject's routine history.
Normally, the subject either stays at a certain KAA for a relatively long time, or moves to another KAA within a relatively short time. For example, for most residents living at home, writing at a desk or cooking usually takes longer than walking from the desk to the kitchen. The objective of LiLo is to localize the subject's current KAA and recognize the corresponding ADL.
Mostly, a photodiode (PD) gives one reading representing the incoming light value. The reading changes either when the lighting environment changes or when the phone moves to some extent. In our work, a step detector and an accelerometer in a smartphone are employed to distinguish whether the phone is still, moving at the current KAA, or transiting to another position. Moreover, when the phone starts to move, the relative peak of light together with the phone's attitude is helpful for further localization.
In practice, there are some challenges that need to be addressed, both in previous works and in LiLo:
  • From the smartphone's perspective: It is unlikely that the phone collects attitudes directly facing all the luminaires when the subject uses the phone normally, so triangulation (using angle-of-arrival information) is not always feasible. Similarly, as a trilateration example, the project [41] uses linear least squares estimation based on the known distances from several reference points (transmitters' horizontal coordinates), while in reality these distances are difficult to obtain if no photometric information is given. Furthermore, distances computed by a theoretical optical channel model do not meet typical room-level accuracy.
  • From the luminaires' and occasions' perspectives: First, in some interior lighting designs, an LED light array is installed across the ceiling; the high density of the array and the large number of luminaires make it difficult to sense the orientation of the one delivering the peak light. Second, a luminaire on the ceiling may be too weak to be considered a spot light, with its light scattering into the room as completely ambient light. Third, the attitude from the embedded six-axis sensor is not always accurate, and yields a large deviation under magnetic interference. Fourth, the optical transmission channel and the luminaires' photometry are not as ideal as in theory.
  • From the user's perspective: In experimental research, pre-collection of light information for each KAA is reasonable to conduct. However, in real-life usage, pre-collection is not always allowed by users. Besides, luminaire usage varies among inhabitants in terms of their routine habits and the time of day. Thus, how to localize in the initial step is a difficult problem. Technically, the phone's attitude information is one of the crucial features for ADL recognition applications [42,43]. Nowadays, most smartphone models already feature an ambient light sensor (photodiode) and a six-axis sensor (geomagnetic sensor and gravity acceleration sensor); the ambient light (illuminance) sensor is visible on the face of the device. Thus, LiLo takes full advantage of Android smartphones, retrieving the light level together with the attitude to derive the indoor location. Note that LiLo does not require additional infrastructure support or device modification beyond a standard smartphone: the light level comes from the PD and the attitude from the six-axis sensor, all on the smartphone.
Most VLC-based techniques use LEDs as the light source, since they can be modulated easily and hence a luminary ID with location data can be transmitted. In contrast, LiLo supports various conventional light sources, including incandescent, fluorescent, and LED luminaries, to extract useful location information. To our knowledge, it is the first system that exploits conventional indoor luminaries for fine-grained indoor localization to recognize ADLs and demonstrates their usefulness experimentally.

4. The Radiosity Rendering Model

In this section, we will introduce the optical channel model and a method of rendering, Radiosity, based on a detailed analysis of light reflections off diffuse surfaces. For better illustration, let us introduce several types of light defined by computer graphics researchers:
  • Ambient light scatters its color to all the objects in the scene globally;
  • Directional light shines from a specific direction, as if it were infinitely far away, so the rays are considered parallel; the sun is a typical directional light source;
  • A hemisphere light source is positioned directly above the scene; a ceiling light is closest to a hemisphere light source;
  • A point light sits at a specific position in the scene, and its light shines in all directions;
  • A spot light is a point light restricted to one direction within a falloff cone, and it can cast shadows.
The original Radiosity system was developed in [44]. This module calculates the light exchange between luminaires and other surfaces (direct lighting) and the light exchange between illuminated surfaces (indirect lighting). Not only the direct lighting emitted by a certain luminary, but also light from the sky (daylight) or direct sunlight can be calculated with the calculation kernel. Based on the energy conservation principle, the premise is that any light which is projected onto a surface and is not absorbed will be re-emitted by this surface. In our work, the Radiosity system is employed to compute the light level at a target height in rooms.

4.1. Discrete Radiosity Overview

Surfaces are assumed to be perfectly Lambertian (diffuse), reflecting incident light in all directions with equal intensity. With the radiosity method, an equation is created for each surface. The scene is divided into a set of small areas, or patches; the radiosity $B_i$ of patch $i$ is the total rate of energy leaving the surface, and the radiosity over a patch is constant.
This equation defines the light leaving a patch as the sum of the light reflected from other surfaces and, if present, the patch's own emission. Altogether this provides a set of equations whose solution represents the brightness of each individual surface; the reflected light that is perceived is thus a combination of multiple light sources [44]. We separate the scene into $n$ patches, over which the radiosity is constant:
$$B_i = E_i + \rho_i \sum_{j=1}^{n} F_{ij} B_j, \quad i = 1, \dots, n$$
where $B_i$ is the light leaving patch $i$, $E_i$ is the light emitted from patch $i$, and $\rho_i$ is the reflectivity of surface $i$, which absorbs a certain percentage of the light energy striking it. $F_{ij}$ is the form factor, the fraction of light energy leaving patch $j$ that arrives at patch $i$; it is determined by both geometry (size, orientation, and position of the two patches) and visibility, i.e., any occlusions in between.
The $n$ simultaneous equations with $n$ unknown $B_i$ values can be written in matrix form:
$$\begin{bmatrix} 1-\rho_1 F_{11} & -\rho_1 F_{12} & \cdots & -\rho_1 F_{1n} \\ -\rho_2 F_{21} & 1-\rho_2 F_{22} & \cdots & -\rho_2 F_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_n F_{n1} & -\rho_n F_{n2} & \cdots & 1-\rho_n F_{nn} \end{bmatrix} \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_n \end{bmatrix} = \begin{bmatrix} E_1 \\ E_2 \\ \vdots \\ E_n \end{bmatrix}$$
The “full matrix” radiosity solution calculates the form factors between each pair of surfaces in the environment, then forms a series of simultaneous linear equations.
A single radiosity value $B_i$ is computed for each patch in the environment, so a view-independent solution is obtained. The radiosity of a single patch $i$ is updated at each iteration by gathering the radiosities from all other patches:
$$\begin{bmatrix} B_1 \\ \vdots \\ B_i \\ \vdots \\ B_n \end{bmatrix}^{(t+1)} = \begin{bmatrix} E_1 \\ \vdots \\ E_i \\ \vdots \\ E_n \end{bmatrix} + \begin{bmatrix} \rho_1 F_{11} & \cdots & \rho_1 F_{1n} \\ \vdots & \ddots & \vdots \\ \rho_i F_{i1} & \cdots & \rho_i F_{in} \\ \vdots & \ddots & \vdots \\ \rho_n F_{n1} & \cdots & \rho_n F_{nn} \end{bmatrix} \begin{bmatrix} B_1 \\ \vdots \\ B_i \\ \vdots \\ B_n \end{bmatrix}^{(t)}$$
where $t$ denotes the iteration index. This method is fundamentally a Gauss–Seidel relaxation.
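For concreteness, the following is a minimal Python sketch of this iterative solve, assuming the form-factor matrix and emission vector are already known; the patch count, reflectivities, and form-factor values below are illustrative, not taken from the paper.
```python
import numpy as np

def solve_radiosity(E, rho, F, tol=1e-6, max_iters=1000):
    """Gauss-Seidel relaxation of B_i = E_i + rho_i * sum_j F_ij * B_j."""
    n = len(E)
    B = E.astype(float).copy()            # start from the self-emitted light
    for _ in range(max_iters):
        max_delta = 0.0
        for i in range(n):
            new_Bi = E[i] + rho[i] * np.dot(F[i], B)  # gather from all patches
            max_delta = max(max_delta, abs(new_Bi - B[i]))
            B[i] = new_Bi                 # in-place update = Gauss-Seidel
        if max_delta < tol:               # stop once the radiosities settle
            break
    return B

# Toy scene: patch 0 is a luminaire (E > 0); the other patches only reflect.
E = np.array([10.0, 0.0, 0.0])            # self-emitted light per patch
rho = np.array([0.0, 0.5, 0.3])           # reflectivity of each patch
F = np.array([[0.0, 0.2, 0.1],            # F[i][j]: fraction of light leaving
              [0.2, 0.0, 0.3],            # patch j that arrives at patch i
              [0.1, 0.3, 0.0]])
print(solve_radiosity(E, rho, F))         # per-patch radiosities B_i
```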
The geometric terms in the form-factor derivation are illustrated in Figure 1. For non-occluded environments, the form factor between finite surfaces (patches) is defined as the area average:
$$F_{ij} = \frac{1}{A_i} \int_{A_i} \int_{A_j} \frac{\cos\theta_i \cos\theta_j}{\pi r^2}\, V_{ij} \, dA_j \, dA_i$$
This is the “form factor” between surfaces $i$ and $j$, which accounts for the physical relationship between the two surfaces: $A_i$ is the area of surface $i$, $r$ is the vector from patch $i$ to patch $j$, $r^2$ is the square of the distance $r$, and $\theta_i$ is the angle between the normal of $i$ and the vector $r$. $V_{ij}$ is a boolean visibility function between patches $i$ and $j$, taken as 0 if a point on $i$ is occluded with respect to a point on $j$, and 1 if unoccluded. The reciprocity law states that
$$A_i F_{ij} = A_j F_{ji}$$
The “radiosity equation” describes the amount of energy emitted from a surface as the sum of the energy inherent in the surface (a light source, for example) and the energy that strikes the surface after being emitted from other surfaces.
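As a complement, a single-sample numerical estimate of the form-factor integral (Equation (4)) between two small patches might look as follows; the patch geometry is invented for illustration, and a real renderer would subdivide and integrate over the patch areas.
```python
import numpy as np

def form_factor(p_i, n_i, p_j, n_j, A_j, visible=True):
    """One-point estimate of F_ij for two small patches i and j."""
    r_vec = p_j - p_i                        # vector r from patch i to patch j
    r2 = float(np.dot(r_vec, r_vec))         # squared distance
    r_hat = r_vec / np.sqrt(r2)
    cos_i = max(float(np.dot(n_i, r_hat)), 0.0)    # angle at the emitting side
    cos_j = max(float(np.dot(n_j, -r_hat)), 0.0)   # angle at the receiving side
    V = 1.0 if visible else 0.0              # boolean visibility V_ij
    return cos_i * cos_j / (np.pi * r2) * V * A_j

# Desk patch facing up, ceiling patch facing down, 2 m apart, each 0.01 m^2.
print(form_factor(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                  np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0]), 0.01))
```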

5. Localization Based on Luminance Field Map with Triangulation

The Android operating system is set up to calculate a rotation matrix $R$, defined by
$$R = \begin{bmatrix} E_x & E_y & E_z \\ N_x & N_y & N_z \\ G_x & G_y & G_z \end{bmatrix}$$
where $x$, $y$, and $z$ are axes relative to the smartphone (see Figure 2), $E = (E_x, E_y, E_z)$ is a unit vector pointing East, $N = (N_x, N_y, N_z)$ is a unit vector pointing North, and $G = (G_x, G_y, G_z)$ is a unit vector pointing away from the center of the earth (the gravity vector).
The Euler angles $\phi$, $\theta$, and $\psi$ in the Android operating system are defined as: azimuth $\phi$, the rotation about the $G$ ($z$) axis; pitch $\theta$, the rotation about the $E$ ($x$) axis; and roll $\psi$, the rotation about the $N$ ($y$) axis.
Thus, the $3 \times 3$ rotation matrix $R$ is expressed in terms of the Euler angles as
$$R = \begin{bmatrix} \cos\phi\cos\psi - \sin\phi\sin\psi\sin\theta & -\sin\phi\cos\theta & \cos\phi\sin\psi + \sin\phi\cos\psi\sin\theta \\ \sin\phi\cos\psi + \cos\phi\sin\psi\sin\theta & \cos\phi\cos\theta & \sin\phi\sin\psi - \cos\phi\cos\psi\sin\theta \\ -\sin\psi\cos\theta & \sin\theta & \cos\psi\cos\theta \end{bmatrix}$$
The unit vectors in the direction of the $x$, $y$, and $z$ axes of a three-dimensional Cartesian coordinate system are
$$\hat{i} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \hat{j} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \hat{k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
A direction vector can be transformed from the device coordinate system to the world coordinate system with the rotation matrix $R$, which rotates vectors:
$$V_G = R\, V_D$$
where $V_D$ is a vector $V$ measured in the frame of reference of the device, and $V_G$ is the same vector measured in the frame of reference of the global world.
The unit vector pointing away from the face of the smartphone is $\hat{k}$, and its transformation into the world frame of reference is
$$k_G = R\,\hat{k} = \begin{bmatrix} \cos\phi\sin\psi + \sin\phi\cos\psi\sin\theta \\ \sin\phi\sin\psi - \cos\phi\cos\psi\sin\theta \\ \cos\psi\cos\theta \end{bmatrix}$$
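A small sketch of this transform, using the rotation matrix reconstructed above; the test angles and values are illustrative only.
```python
import numpy as np

def rotation_matrix(azimuth, pitch, roll):
    """Device-to-world rotation R from the Euler angles (phi, theta, psi)."""
    cp, sp = np.cos(azimuth), np.sin(azimuth)   # phi
    ct, st = np.cos(pitch), np.sin(pitch)       # theta
    cs, ss = np.cos(roll), np.sin(roll)         # psi
    return np.array([
        [cp * cs - sp * ss * st, -sp * ct, cp * ss + sp * cs * st],
        [sp * cs + cp * ss * st,  cp * ct, sp * ss - cp * cs * st],
        [-ss * ct,                st,      cs * ct],
    ])

k_device = np.array([0.0, 0.0, 1.0])        # unit vector out of the screen
R = rotation_matrix(np.radians(30), np.radians(-10), np.radians(5))
k_world = R @ k_device                      # facing direction k_G in the world
print(k_world)
```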
In this section, we introduce the principle of a triangulation algorithm for localization. A scenario with a set of light bulbs and a mobile terminal in a room is illustrated in Figure 3.
As in reality, the light bulbs could be anywhere, not just on the ceiling. The optical channels considered here are all line-of-sight (LOS) links.
Mostly, the distances from reference points (transmitters’ horizontal coordinates) are unknown. Therefore, this system relies on the angle of arrival (AOA) instead of distance computations.
Three luminaires and a smartphone are exhibited in Figure 3. The known coordinates of the luminaires in the global world coordinate system are $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, and $(x_3, y_3, z_3)$. The following depicts how to estimate the coordinates of the smartphone $(x_p, y_p, z_p)$ from the coordinates of the reference points and the attitude of the smartphone. To start with, the vectors can be represented as
$$\vec{OL_i} - \vec{OP} = \vec{PL_i} = \lambda_i \, k_G^{(R_i)}, \quad i = 1, 2, 3$$
where $k_G^{(R_i)}$ denotes the facing vector $k_G$ obtained from the phone attitude $R_i$ at the light peak of luminaire $i$.
Thus, $|\vec{PL_i}|$ is the distance from reference point $i$ to the device $P$:
$$(x_i, y_i, z_i) - (x_p, y_p, z_p) = \lambda_i\,(\Delta x_{ri}, \Delta y_{ri}, \Delta z_{ri}), \quad i = 1, 2, 3$$
Equation (13) is a restatement of Equation (12), with the subtractions expanded in terms of the elements of the vectors. Therefore, the z coordinate is computed from the three reference points.
$$z_1 - \lambda_1 \Delta z_{r1} = z_2 - \lambda_2 \Delta z_{r2} = z_3 - \lambda_3 \Delta z_{r3} = z_p$$
where the $\lambda_i$ are scaling factors to be computed.
The location is validated when the following conditions are satisfied:
$$\begin{aligned} |(z_1 - \lambda_1 \Delta z_{r1}) - (z_2 - \lambda_2 \Delta z_{r2})| &\le \mathit{lengthPhone}/2 \\ |(z_1 - \lambda_1 \Delta z_{r1}) - (z_3 - \lambda_3 \Delta z_{r3})| &\le \mathit{lengthPhone}/2 \\ |(x_1 - \lambda_1 \Delta x_{r1}) - (x_3 - \lambda_3 \Delta x_{r3})| &\le \mathit{lengthPhone}/2 \\ |(y_1 - \lambda_1 \Delta y_{r1}) - (y_3 - \lambda_3 \Delta y_{r3})| &\le \mathit{lengthPhone}/2 \\ 0 \le x_p \le \mathit{lengthSpace},& \quad 0 \le y_p \le \mathit{widthSpace}, \quad 0 \le z_p \le \mathit{heightSpace} \end{aligned}$$
Here, $\mathit{lengthPhone}$ is the maximum dimension of the smartphone (length, width, or height), and $\mathit{lengthSpace}$, $\mathit{widthSpace}$, and $\mathit{heightSpace}$ are the length, width, and height of the room. The solution of Equations (14) and (15) gives the intersection points of circles, predicting a zone of indoor localization. Throughout this work, the smartphone only needs to be used as normal. The data delay of sensors in the Android system ranges from 0 to 200,000 microseconds, which is much shorter than the timespan of a human behavior. The light and orientation sensors work sensitively all the time, and as data packages accumulate, the orientation values with the top-k highest light values can be obtained with certainty.
Algorithm 1 shows the localization computation for the smartphone. The algorithm starts from the unit vector described in Equation (11), collects the sets of x- and z-coordinates based on Equations (14) and (15), and concludes with the collection of circle intersections, predicting a zone of indoor localization.
Algorithm 1: Phone localization calculation
(Pseudocode provided as an image in the original article.)
Algorithm 2 shows the location validation for the smartphone. In order to estimate a precise position, at least two luminaires detected in the space are necessary; the estimated position can then be validated if more luminaires are detected. Accordingly, after receiving the arguments from Algorithm 1, this algorithm applies the constraints of Equation (15) to yield the valid position.
Algorithm 2: Phone localization validation
(Pseudocode provided as an image in the original article.)
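Since the two algorithm listings are only available as images, the following is a hedged reconstruction of their core steps from Equations (12)–(15): solve $L_i - P = \lambda_i k_i$ for the position $P$ by linear least squares, then apply Equation (15)-style bound checks. The luminaire coordinates, facing vectors, and room dimensions here are invented for illustration.
```python
import numpy as np

def localize(luminaires, directions):
    """Least-squares solve of L_i - P = lambda_i * k_i for the phone position P.

    luminaires: (m, 3) known luminaire positions in world coordinates.
    directions: (m, 3) facing vectors k_G at the light peak of each luminaire.
    """
    m = len(luminaires)
    A = np.zeros((3 * m, 3 + m))
    b = np.zeros(3 * m)
    for i in range(m):
        A[3 * i:3 * i + 3, 0:3] = np.eye(3)        # unknown position P
        A[3 * i:3 * i + 3, 3 + i] = directions[i]  # unknown scale lambda_i
        b[3 * i:3 * i + 3] = luminaires[i]         # since L_i = P + lambda_i*k_i
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]                        # P and the lambda_i

def validate(P, room=(7.6, 3.4, 2.7)):
    """Equation (15)-style check: the estimate must lie inside the room."""
    return all(0.0 <= P[k] <= room[k] for k in range(3))

lums = np.array([[1.0, 1.0, 2.5], [3.0, 2.0, 2.5], [5.0, 1.5, 2.5]])
dirs = np.array([[0.0, 0.0, 1.0],                  # consistent with P=(1,1,0.5)
                 [2 / 3, 1 / 3, 2 / 3],
                 [4 / 4.5, 0.5 / 4.5, 2 / 4.5]])
P, lam = localize(lums, dirs)
print(P, validate(P))          # approximately (1.0, 1.0, 0.5), True
```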
Yet, when a smartphone detects only one luminary in the space, it is still possible to localize the phone. A changing light value is detected when the smartphone moves. First, when the light value reaches its peak, the attitude of the phone is retrieved from the six-axis sensor; along this direction, we can draw a line from the device toward the luminaire. Different luminaires yield different light volumes, which in turn validate the location result. In the localization period, when new recordings with approximately the same direction are detected, the system compares them with the historical records of light values and attitudes, and is thus able to relocate those clusters under the same direction.

6. System Overview

In this section, the discussion turns to the LiLo system, which tackles the challenges mentioned in Section 3. The system diagram is shown in Figure 4.
(1) Initialization of the illuminance field map: According to the house structure, furniture layout, and luminaire information, an indoor light field map is generated by the Radiosity algorithm described in Section 4.1. For instance, the luminance field map of a bedroom computed by the Radiosity algorithm is shown in Figure 5. In this case, all the luminaires in the scene are ON, and the computed plane is at a height of 70 cm, matching the height of many desk-level KAAs. As shown in this colormap, the computed luminance at KAAs t10, t11, and t12 (see Figure 6) is 5–10, 30–75, and 10–30 units of measurement, respectively. The computed luminance values at the KAAs are recorded and sorted with respect to the different facing azimuth angles of a smartphone.
The facing azimuth angle of the smartphone is mostly subject to the furniture layout. Normally, without frequent changes to the furniture layout, the subject and their phone head in roughly the same direction at each KAA; as shown in the layout in Figure 7, south is the most likely orientation at p9, and east at p1. As illustrated in Figure 4, four frequency luminance lists for the different azimuth angles (north, east, south, and west) are generated, each holding the KAA IDs in descending order of the sorted luminance. Gathering the ADL orientations in Figure 7 and the illuminations in Figure 6, the orientations of the ADLs at KAA p6 (washing dishes in the kitchen) and p3 (washing in the bathroom) are both west-bound, but the illumination at t4 is around 45 units and that at t7 approximately 64 units. Although p3 and p6 are in the same frequency illuminance sub-list (west), the illumination differences lead to the conclusion that “washing dishes in the kitchen” happens at KAA p6 and “washing in the bathroom” happens at KAA p3.
Step by step, the most frequent heading orientation of the phone at each KAA can be learned and retrieved as historical data grow. Nevertheless, if this assumption does not hold all the time, the localization in the cold start can be initialized by the AOA algorithm in Section 5.
(2) Ambient light, geomagnetic field, and orientation sensor data collection on the fly: The client (smartphone) generates and bundles a series of ambient Light, Orientation, Time, and Step information (LOTS), then sends the information packages to a reference cloudlet server, which holds the illuminance field map of the floor plan. Note that the Android system computes the orientation angles using the device's geomagnetic field sensor in combination with its accelerometer (https://developer.android.com/guide/topics/sensors/sensors_position and https://developer.android.com/guide/topics/sensors/sensors_overview, accessed on 14 January 2019). Intuitively, the light level represents the surrounding environment, and the orientation represents the subject's facing direction. ADL patterns and light usage change across different times of day, as each ADL lasts for a while at its KAA. A walking transition between any two KAAs affects the accelerometer-based step counter, whose incremental value effectively suggests segmenting two sequential ADLs.
These incoming LOTS packages have a two-fold function. On the one hand, the new LOTS packages are streamed to be classified by the trained model for a location estimation. On the other hand, they serve as candidates to derive a KAA candidate label after comparison against a KAA-based frequency luminance list. The frequency demultiplexer serves as a dispatcher feeding the LOTS packages to the multiple frequency luminance lists, and the AOA multiplexer combines the frequency luminance lists for persistence as historical data. In the meantime, the candidate label is validated by the AOA algorithm in Section 5.
This label record is fed into a historical database to update the labels in the same azimuth group. After a preset period of time, the system updates a light-level model for every KAA on this floor, as shown in Figure 6. The database updating is primarily needed to keep track of abnormalities, such as changes in the interior furniture layout or the dysfunction of a luminaire. Once such a change happens, the information within the frequency luminance lists is updated accordingly, which in turn leads to a new training model.
(3) Autocalibration to upgrade to an advanced illuminance field map: The LOTS packages at each KAA are archived into a database no matter which direction the smartphone is heading. Since the illuminance field map only takes effect when the smartphone approximately faces up, the recordings selected from the reference package for comparison against a luminance list only include the facing-up ones. With the real LOTS data of the phone, the illuminance field map generated at the cold start is upgraded to an advanced modality with eight dimensions: the light level (l) at the (x, y, z) position in the map, the (azimuth, pitch, roll) gyroscope information at that position, and the time (t). Furthermore, multi-dimensional KAA-based frequency luminance lists are stored as templates for the subsequent location estimation stage.
(4) Location estimation: In practice, the client sends the LOTS packages to the reference server. During the initial inference phase, the location is determined using the illuminance field map with orientation knowledge. After the cold-start phase, the server compares the measured light value and geomagnetic field signals with the records in the advanced illuminance field map database. Eventually, the location results are computed by a machine learning classifier.
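To make steps (1)–(3) concrete, the sketch below shows one plausible shape for the eight-dimensional illuminance record and the per-azimuth frequency luminance lists; all field names, KAA IDs, and numbers are illustrative assumptions, not the paper's actual data structures.
```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LightFieldRecord:
    light: float                      # ambient light level from the PD
    x: float; y: float; z: float      # position in the floor-plan frame (m)
    azimuth: float                    # device attitude (degrees)
    pitch: float
    roll: float
    t: float                          # epoch time (s)

    def facing_up(self, tol: float = 15.0) -> bool:
        """Only near-face-up readings are matched against the field map."""
        return abs(self.pitch) < tol and abs(self.roll) < tol

def azimuth_bucket(azimuth: float) -> str:
    """Quantize a heading into the four frequency-list buckets."""
    return ["north", "east", "south", "west"][int(((azimuth + 45) % 360) // 90)]

# (KAA id, most frequent heading in degrees, computed luminance), loosely
# following the bedroom example: t10 (bed), t11 (desk), t12 (dinner spot).
kaa_table = [("t10", 270, 8.0), ("t11", 90, 50.0), ("t12", 90, 20.0)]

freq_lists = defaultdict(list)
for kaa, heading, lum in kaa_table:
    freq_lists[azimuth_bucket(heading)].append((kaa, lum))
for entries in freq_lists.values():
    entries.sort(key=lambda e: e[1], reverse=True)   # descending luminance

def candidate_kaa(rec: LightFieldRecord):
    """Propose the KAA whose recorded luminance best matches this reading."""
    if not rec.facing_up():
        return None
    entries = freq_lists.get(azimuth_bucket(rec.azimuth), [])
    return min(entries, key=lambda e: abs(e[1] - rec.light), default=None)

rec = LightFieldRecord(18.0, 2.0, 1.0, 0.7, 85.0, 3.0, -2.0, 1_660_000_000.0)
print(candidate_kaa(rec))   # -> ('t12', 20.0)
```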

7. Experiment Environment

Here, we use one apartment to illustrate the experimental process. This work took place in an actual apartment of size 800 ft² as the living environment, the layout of which is shown in Figure 7, containing a bedroom (position p1), a bathroom (position p3), and a living room (position p8) with a combined kitchen (position p5). The locations of the luminaires are marked with hollow squares in the floor plan map, and their photometry is displayed in Table 1. In this luminance environment, the beam angle of luminary l8 is ±30 degrees and that of luminary l0 is ±180 degrees, so luminary l0 is a point light; all other luminaires have beam angles of nearly ±90 degrees.
The smartphone used is a Google Nexus 5 with the Android 6.0.1 Marshmallow operating system. This model features sensors for geomagnetic information, orientation information, and light level.

Activities of Daily Living in the Venues

This section introduces the typical activities of daily living that frequently happen in everyday life. Mostly, the locations of ADLs are largely constrained by the furniture layout and appliances; for example, one can only do the washing up by a faucet. In the floor plan (Figure 7), the circles are ADL capturing points, and the arrow around each circle denotes the most frequent facing orientation there.
  • Working on a PC—The major pieces of furniture in the bedroom are a long combined desk and a bed. The facing orientation of this ADL is largely subject to the furniture layout; in this case, the subject often works on the computer and reads material at position p1, facing east.
  • Sleeping and napping—The subject usually falls asleep, takes a nap, or reads on the smartphone at any time in the bed. The illuminance environment at position p2 varies with time: it has a relatively low light level in daylight and a relatively high light level at night when luminary l8 is turned on.
  • Hygiene activities (two types)—Once inside the bathroom, the inhabitant performs normal hygiene activities at position p4, facing either west or east, or washes at position p3, facing east.
  • Cooking—At position p5, the subject cooks, chops, and prepares food facing south, toward where the stoves are located.
  • Washing dishes—At position p6, the subject washes dishes, vegetables, fruit, etc., with a heading orientation of west.
  • Eating—The sofa at position p9 is most frequently used when the resident has meals, including breakfast, lunch, dinner, and mid-night snacks. The smartphone is usually placed on the sofa table or on the sofa (t3 and t2, respectively), as shown in Figure 6.
  • Dressing up—At the entrance hall (position p7), the subject usually selects clothes by the wardrobe and puts on or removes shoes by the shoe storage cabinet.

8. Experiments

In the following subsections, we evaluate the performance of LiLo. Section 8.1 is designed to validate how well LiLo can locate users under different luminaire arrangements across different KAAs in an apartment.
Section 8.2 and Section 8.3 mine historical data from real-world scenarios, collected by the ADL Recorder App project [42]. These experiments aim to answer whether the proposed LiLo system contributes to location accuracy and ADL recognition.
Section 8.4 tests the performance of different machine learning classifiers for fine-grained recognition in defined spaces.

8.1. Localization Based on Orientation, Light Level, and Time

Data were precollected from the Light Meter App running on a Nexus 5 smartphone. The Light Meter, developed by our group, detects the ambient light level, three attributes from the geomagnetic field sensor, and three attributes from the orientation sensor; it encapsulates the data and sends packages to the reference cloudlet server. At each KAA, the smartphone is rotated arbitrarily to collect data from multiple directions, so a number of records with different phone attitudes are gathered.
The light levels at multiple KAAs are tested, and different illuminations are displayed in various colors, as shown in Figure 6.
For each KAA, the illumination is determined by the different ON/OFF status combinations of the surrounding luminaires, so the possible light situations at each KAA are recorded. For instance, at KAA p6, the most contributive light sources are the lamp above the faucet (l2), the kitchen ceiling lamp (l3), and the lamp under the microwave oven (l5), as illustrated in Figure 7. The data at p6 therefore come from the eight combinations of luminaire (l2, l3, and l5) status, where each luminary is either ON or OFF.
The total number of instances collected in this experiment is 24,631. Six comparison sessions are analyzed, and the model is trained with a Bayesian network classifier in WEKA [45]. Sessions (a) and (b) use data from both the geomagnetic field sensor and the orientation sensor; sessions (c) and (d) use data only from the orientation sensor; and sessions (e) and (f) use data only from the geomagnetic sensor. Sessions (b), (d), and (f) use data only when the smartphone approximately faces up, i.e., with both roll and pitch angles less than 15 degrees.
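For readers who prefer a runnable analogue of these sessions, the sketch below mirrors the setup with scikit-learn's Gaussian naive Bayes standing in for WEKA's Bayesian network; the CSV file and column names are assumptions, not the paper's actual artifacts.
```python
import pandas as pd
from sklearn.naive_bayes import GaussianNB

GEO = ["geo_x", "geo_y", "geo_z"]          # geomagnetic field sensor
ORI = ["azimuth", "pitch", "roll"]         # orientation sensor
SESSIONS = {                               # (feature set, facing-up filter)
    "a": (GEO + ORI, False), "b": (GEO + ORI, True),
    "c": (ORI, False),       "d": (ORI, True),
    "e": (GEO, False),       "f": (GEO, True),
}

df = pd.read_csv("light_meter_records.csv")    # hypothetical Light Meter export
for name, (features, face_up_only) in SESSIONS.items():
    data = df[(df["pitch"].abs() < 15) & (df["roll"].abs() < 15)] if face_up_only else df
    X, y = data[features + ["light"]], data["kaa_label"]
    model = GaussianNB().fit(X, y)
    mcr = 1.0 - model.score(X, y)              # evaluated on the training data,
    print(f"session {name}: Mcr = {mcr:.3f}")  # matching the paper's setup
```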
Testing data are evaluated on the training data. Two metrics are considered here: Mcr, the misclassification rate, and crossTypeMcr, the misclassification rate occurring across different types. The types here mean different combinations of light status (ON or OFF); for example, a misclassification between different light situations at the same KAA is not regarded as an error, because the KAA result is identical. Thus, crossTypeMcr draws more attention to KAA-level recognition, while Mcr focuses not only on the KAA recognition but also on the status of the surrounding luminaires. The comparison result is shown in Figure 8.
In general, the recognition result for the facing-up smartphone is better than for the multi-direction data, mainly because the ambient light sensor retrieves more precise illuminance information when the phone faces up. Furthermore, the combined usage of the geomagnetic field sensor and the orientation sensor outperforms using the orientation sensor alone, which in turn works better than using the geomagnetic field sensor alone. The best performance is obtained using data from both sensors when the smartphone is facing up, mainly because more valuable information is fed into the classifier.

8.2. ADL Recognition Based on Orientation, Light Level, and Time

8.2.1. Experimental Setup and Data Collection

Real-time recordings from a variety of ADL scenes were generated by the ADL Recorder App [42]. The App project aims to recognize ADLs via a single-point Android-based smartphone, which captures the ADL types with multiple sensing data, including light level, azimuth angle, time, Wi-Fi RSSI values, and so on. A Nexus 5 smartphone with the Android system (Marshmallow 6.0.1) is used as the experimental device. The assumption for this experiment is that the smartphone is used normally and placed by the subject at every KAA.
Attributes used in the ADL recognition stage include time, orientation, and light level. The time attribute plays a more critical role than a raw timestamp, as natural factors also impact the lighting environment; for example, the light volume through windows differs from daylight to night. Furthermore, luminaire usage varies over time: one could turn on a desk lamp for reading at night, and afterwards turn on the floor lamp for wandering around the room.
The classification performance was evaluated using leave-one-out cross-validation, where the classifier is trained with all instances except the one left out for classification. In this way, the training data are maximally utilized, even though the system has never experienced the particular recording under test. The overall recognition rates were calculated as the sample mean of the recognition rates of the individual scenes.
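A compact sketch of this protocol, with scikit-learn again standing in for WEKA and random placeholder data in place of the 4388 actual recordings:
```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((100, 4))             # placeholder {light, orientation,
y = rng.integers(0, 21, 100)         # hour, location} features, 21 scene labels
scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print("LOOCV recognition rate:", scores.mean())
```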
In Table 2, the different scenes and the number of recordings from each scene are listed. The recordings are categorized into five general classes according to the locations of the scenes (bathroom, kitchen, dining room, bedroom, and some public places). The first four groups of scenes happen in the apartment shown in Figure 7, and the public places include four buildings located farther away in the city of this experiment, Ames, Iowa. The apartment in the experiment is located in the northeastern area of the city. “Amespubliclibrary” represents the Ames public library, located in the center of downtown; “PCoffice” represents the subject working in the office in Atanasoff Hall, which is in the center of the campus; “Gilman1353” represents classroom 1353 in Gilman Hall on the campus; and “RossHall” represents a classroom in the basement of Ross Hall, in the eastern area of the campus.
After preprocessing the data and deriving the location information from the combination of Wi-Fi RSSI values, an ARFF (Attribute-Relation File Format) file is produced and fed into WEKA. The training set includes all the sensing data records (21 different scenes, 4388 samples), and the test set uses the same training set. In this session, we trained and tested the dataset with Bayesian network classifiers. The Wi-Fi-based algorithm [46] was applied to generate predictions at a coarse granularity (main context in Table 2), and the light level signal was computed to retrieve the expected fine-granularity prediction (scenes in Table 2).

8.2.2. Recognition Accuracy and Discussions

The confusion matrix for the 21 classified scenes using the Bayesian network classifiers is presented in Figure 9. The rectangular boxes enclose the more general contexts, as presented in Table 2. The overall recognition rate was 89.93% for the analysis of 4388 instances on the attributes {light, orientation angle, hour, location}. The boxes enclose the more general classes with high-level locations; at this general context level, the recognition rate is 99.84%.
- The type “Bathroomflushing” is the most common target of misclassifications, because “Bathroomflushing” and “Bathroomfaucet” usually take place near position p3 at the same time. However, after the audio processing stage in [47], these misclassifications are largely corrected, because the sound of the “Bathroomflushing” type is distinct from that of “running water from faucet”, “Pee”, and “Bowel”;
- The types “Bathroomfaucet” and “Bathroompee” are nearly 30% misclassified between each other. Basically, because the two ADL positions are close to each other, both near position p3 in Figure 7, the light level and orientation are roughly similar, and the two ADLs may even occur sequentially;
- The ADLs of “chopping”, “cooking”, and “washing dishes” usually interweave, and the actual spots are close to each other (in the neighborhood of positions p5 and p6), so the illuminance atmosphere generated by the luminaires {l2, l3, l5} is more or less the same. Similarly, audio processing helps reduce these misclassifications;
- All “eating” ADLs happen in the living room, and slight misclassifications exist between “dinner” and “mid-night snack”, as they share position p9 and occur subsequently;
- “Lunch” is sometimes misclassified as “dinner”, partially because of mislabeling: the subject sometimes reported the second meal of the day as “lunch” no matter when the meal was, even after 5:00 p.m.;
- “Getting up”, “nap”, and “sleep” are misclassified as “working on PC at home” due to the same orientations and the shared venues (positions p1 and p2). Additionally, the subject usually “works on PC at home” at any time of day;
- Interestingly, the attributes of time and light level are helpful for the recognition of “dinner in bedroom”, although the orientations and venues (position p1) are roughly the same as those of “working on PC at home”. Specifically, “dinner in bedroom” takes place at t12 in Figure 6, and “working on PC at home” happens at t11; the distance between the two venues is less than half a meter.
The recognition accuracy of scenes at the location level ranged from 99.7% (bathroom) to 100% (public places). Basically, the illuminance atmosphere and time at each place are distinct. Note that this satisfactory performance is gained without the usage of GPS information, while the localization for the “public places” is still highly accurate.
If the attributes fed into the Bayesian network classifier are {light level, orientation, hour}, leaving out the location estimation based on Wi-Fi RSSI, the overall correct classification rate is 76.16%, and the classification rates of the five location-level general categories are 79.78%, 70.07%, 93.27%, 83.51%, and 96.23%. The most major misclassification is between “kitchen”-related and “dining room”-related scenes; furthermore, the majority of ADL-level misclassifications are from “cooking” to either “dinner” or “lunch”. The likely reason is that these functional places connect with each other and the luminaires l3 and l4 illuminate the entire area.

8.3. Comparison of Recognition Rates Using Different Attributes

The dataset of this experiment is the same as that in Section 8.2, with 4388 samples from 21 different scenes. In experiments (a)–(g), we trained and tested the dataset with Bayesian network classifiers; in experiment (h), we selected the J48 decision tree algorithm as the classifier. The pre-recognized location is derived from the Wi-Fi localization algorithm via an SVM method. The performance comparison is shown in Figure 10.
Supposing that a Wi-Fi-based localization algorithm can deliver highly accurate locations, the results show that the “light” feature positively contributes to localization performance: by adding the “light” attribute, the ADL-level recognition rate increases from 87.03% (b) to 89.93% (c).
However, retrieving accurate locations by Wi-Fi localization is not always plausible, for two major reasons: first, the Wi-Fi RSSI value usually shifts transiently, and sometimes Wi-Fi connections are even lost; second, in a house, the Wi-Fi RSSI status of each room has few distinguishing features, because the rooms are close to each other and the walls block the signal. Comparing experiments (d) and (e), adding the “light” attribute raises the ADL-level recognition rate distinctly from 69.07% to 76.32%, and the location-level recognition rate increases from 77.80% to 83.89% as well. That is to say, the “light” attribute substantially contributes to the performance.
The essential combination {light, azimuth angle, hour} (a) is a solution for recognizing both ADL types and locations without the Wi-Fi-based localization process. The performance difference between this basic combination and the one importing inaccurate “pre-recognized locations” (e) is negligible. Without the Wi-Fi RSSI-based process, the system saves the battery consumed by scanning Wi-Fi RSSI, the storage for the combinations of RSSIs, and the computation overhead of the pre-recognition process.
Comparing the experiments with nearly correct pre-recognized locations (b and c) and with inaccurate locations (d and e) indicates that the “light” attribute contributes to performance improvement in both cases. With the attribute of three-axis “acceleration” data, the performance of experiment (g) grows further. Here, the accuracy of 84.23% is consistent with that of session (e) of Section 8.1, shown in Figure 8.
In experiment (h), the J48 decision tree algorithm is selected to classify the ADL types and locations, and the performance grows to an acceptable 93.92% and 96.74%, respectively.

8.4. Recognition in a Given Space by Different Classifiers

There are two sessions in this experiment: one for a living room and kitchen with an area of 7.6 m × 3.4 m, and the other for a bedroom with an area of 3.5 m × 3.5 m. From the floor plan in Figure 7, a Wi-Fi-based localization algorithm alone can hardly differentiate between the kitchen and living room areas, due to the lack of salient signal features. Hence, the objective of this experiment is to validate the recognition rate of precise areas within a given room.
From the floor plan in Figure 6, in the living room, the ADLs usually happen in two KAAs: the “eating”-series ADLs, including “breakfast”, “lunch”, “dinner”, and “mid-night snack”, take place on the tea table in the t2 or t3 region, and the ADLs of “chopping”, “cooking”, and “washing dishes” usually happen in the kitchen, neighboring KAA t5.
In the bedroom, the ADLs usually happen in three KAAs: “getting up”, “napping”, and “sleeping” take place in the bed near KAA t10; “working on PC at home” usually happens at the desk near KAA t11; and sometimes the subject has “dinner in the bedroom” in the t12 region. These KAAs are considered the location-level measurement.
We trained the data with the attributes {light level, azimuth angle, 3-axis accelerometer, time} in WEKA, with the Bayesian network and the J48 tree selected as classifiers. The comparison is shown in Figure 11. The “ADL-level” recognition metric is considered correct if and only if the specific ADL, such as “getting up” or “napping”, is recognized correctly. Likewise, the “location-level” recognition metric is considered correct if and only if the specific location of the ADL (e.g., KAA t3, t5, t10, etc.) is recognized correctly.
From the comparison result, in such a small bedroom, the recognition rates at both the ADL level and the location level reach above 94%. Note that the distance between t10 and t11 is less than 40 cm, and the misclassification rate between them is less than 1.9% under J48 tree classification; the distance between t11 and t12 is less than 60 cm, and the ADL-level misclassification rate between them is less than 4% under J48 tree classification.
The higher recognition accuracy in a small room, compared with multiple rooms, is partially due to the smaller number of ADL categories. Besides, luminaries with similar luminance characteristics are one factor causing classification errors. For example, the light coming through the window in the living room and that through the window in the bedroom differ little in the afternoon. The properties of light (especially luminous intensity) of the ceiling light in the living room, l4 in Figure 7, are similar to those of ceiling light l9 in the bedroom, because they have similar heights, luminance intensities, and color temperatures.
A performance comparison of LiLo with the results reported in prior works is shown in Table 3.

9. Conclusions

We present and develop LiLo, a light-based localization system, to estimate smartphone users' various activities of daily living (ADLs) via machine learning algorithms. This service employs the ambient light level, together with orientation information, obtained entirely by a single-point smartphone, to estimate the indoor position with high accuracy. Overall, no extra heavy infrastructure is involved. Moreover, this application does not impose much extra burden or battery consumption on the phone, because the ambient light sensor and orientation sensor already keep running in the background all the time. This project concentrates on localization at key active areas (KAAs), where the most frequent ADLs take place. In addition, we tackle the technical challenge of cold-start localization without reference records, and solve the problem of the low likelihood of collecting data in an off-line phase. LiLo takes full advantage of the characteristics of the surrounding environment to generate a luminance field map with the Radiosity algorithm. On the server end, in order to achieve a more realistic indoor luminance field and luminance at each KAA, we do consider light propagation between surfaces in the scene; hence, compared with previous works, LiLo is able to deal with spots in storage closets. We have validated the performance of LiLo in both an experimental environment and a real-world environment. Our results show that the overlooked light from conventional off-the-shelf luminaires adds many potential features to the environment, and LiLo achieves decimeter-level indoor location accuracy with regular smartphones. The efficient management of the illuminance field map across different periods of the day is one of our future tasks.

Author Contributions

Conceptualization, Y.F. and C.K.C.; methodology, Y.F.; software, Y.F. and J.W.; validation, J.W., Y.F. and C.K.C.; formal analysis, C.K.C.; investigation, Y.F.; resources, Y.F. and J.W.; data curation, Y.F. and J.W.; writing—original draft preparation, Y.F.; writing—review and editing, J.W. and Y.F.; visualization, J.W. and Y.F.; supervision, Y.F.; project administration, Y.F.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the General Young Talents Project for Scientific Research grant of the Educational Department of Liaoning Province (LJKZ0266), and the Research Support Program for Inviting High-Level Talents grant of Shenyang Ligong University (1010147001010).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zandbergen, P.A. Accuracy of iPhone locations: A comparison of assisted GPS, WiFi and cellular positioning. Trans. GIS 2009, 13, 5–25.
  2. Hightower, J.; Borriello, G. Location Systems for Ubiquitous Computing. Computer 2001, 34, 57–66.
  3. Kavehrad, M.; Weiss, W.L. Indoor positioning by light. In Proceedings of the 2015 IEEE Summer Topicals Meeting Series (SUM), Nassau, Bahamas, 13–15 July 2015; pp. 37–38.
  4. Chung, J.; Donahoe, M.; Schmandt, C.; Kim, I.J.; Razavai, P.; Wiseman, M. Indoor Location Sensing Using Geo-magnetism. In Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, Bethesda, MD, USA, 28 June–1 July 2011; ACM: New York, NY, USA, 2011; pp. 141–154.
  5. Chen, Y.; Lymberopoulos, D.; Liu, J.; Priyantha, B. FM-based Indoor Localization. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, Ambleside, UK, 25–29 June 2012; ACM: New York, NY, USA, 2012; pp. 169–182.
  6. Blochliger, F.; Fehr, M.; Dymczyk, M.; Schneider, T.; Siegwart, R. Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 3818–3825.
  7. Lopez-Rodriguez, F.M.; Cuesta, F. An Android and Arduino Based Low-Cost Educational Robot with Applied Intelligent Control and Machine Learning. Appl. Sci. 2021, 11, 48.
  8. Kotaru, M.; Joshi, K.; Bharadia, D.; Katti, S. SpotFi: Decimeter Level Localization Using WiFi. In Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication, London, UK, 17–21 August 2015; ACM: New York, NY, USA, 2015; pp. 269–282.
  9. Vasisht, D.; Kumar, S.; Katabi, D. Decimeter-Level Localization with a Single WiFi Access Point. In Proceedings of the 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16), Santa Clara, CA, USA, 16–18 March 2016; USENIX Association: Santa Clara, CA, USA, 2016; pp. 165–178.
  10. Seçkin, A.; Coşkun, A. Hierarchical Fusion of Machine Learning Algorithms in Indoor Positioning and Localization. Appl. Sci. 2019, 9, 3665.
  11. Youssef, M.; Agrawala, A. The Horus WLAN Location Determination System. In Proceedings of the 3rd International Conference on Mobile Systems, Applications, and Services, Seattle, WA, USA, 6–8 June 2005; ACM: New York, NY, USA, 2005; pp. 205–218.
  12. Bahl, P.; Padmanabhan, V.N. RADAR: An in-building RF-based user location and tracking system. In Proceedings of the INFOCOM 2000, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Tel Aviv, Israel, 26–30 March 2000; Volume 2, pp. 775–784.
  13. Xu, Q.; Zheng, R.; Hranilovic, S. IDyLL: Indoor Localization Using Inertial and Light Sensors on Smartphones. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Osaka, Japan, 7–11 September 2015; ACM: New York, NY, USA, 2015; pp. 307–318.
  14. Yoshino, M.; Haruyama, S.; Nakagawa, M. High-accuracy positioning system using visible LED lights and image sensor. In Proceedings of the 2008 IEEE Radio and Wireless Symposium, Orlando, FL, USA, 22–24 January 2008; pp. 439–442.
  15. Sertthin, C.; Tsuji, E.; Nakagawa, M.; Kuwano, S.; Watanabe, K. A Switching Estimated Receiver Position Scheme For Visible Light Based Indoor Positioning System. In Proceedings of the 4th International Symposium on Wireless Pervasive Computing, Melbourne, Australia, 11–13 February 2009; pp. 1–5.
  16. Liu, X.; Makino, H.; Maeda, Y. Basic study on indoor location estimation using Visible Light Communication platform. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 2377–2380.
  17. Yang, S.H.; Jeong, E.M.; Kim, D.R.; Kim, H.S.; Son, Y.H.; Han, S.K. Indoor three-dimensional location estimation based on LED visible light communication. Electron. Lett. 2013, 49, 54–56.
  18. Tanaka, T.; Haruyama, S. New Position Detection Method Using Image Sensor and Visible Light LEDs. In Proceedings of the Second International Conference on Machine Vision, Dubai, United Arab Emirates, 28–30 December 2009; pp. 150–153.
  19. Moon, M.G.; Choi, S.I. Indoor position estimation using image sensor based on VLC. In Proceedings of the 2014 International Conference on Advanced Technologies for Communications (ATC 2014), Hanoi, Vietnam, 15–17 October 2014; pp. 11–14.
  20. Rahman, M.S.; Haque, M.M.; Kim, K.D. High precision indoor positioning using lighting LED and image sensor. In Proceedings of the 2011 14th International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 22–24 December 2011; pp. 309–314.
  21. Panta, K.; Armstrong, J. Indoor localisation using white LEDs. Electron. Lett. 2012, 48, 228–230.
  22. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aerosp. Electron. Syst. Mag. 2010, 25, 53–82.
  23. Davidson, P.; Collin, J.; Takala, J. Application of particle filters for indoor positioning using floor plans. In Proceedings of the Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), Kirkkonummi, Finland, 14–15 October 2010; pp. 1–4.
  24. Li, F.; Zhao, C.; Ding, G.; Gong, J.; Liu, C.; Zhao, F. A Reliable and Accurate Indoor Localization Method Using Phone Inertial Sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; ACM: New York, NY, USA, 2012; pp. 421–430.
  25. Jung, S.Y.; Choi, C.K.; Heo, S.H.; Lee, S.R.; Park, C.S. Received signal strength ratio based optical wireless indoor localization using light emitting diodes for illumination. In Proceedings of the 2013 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 11–14 January 2013; pp. 63–64.
  26. Zhang, W.; Kavehrad, M. A 2-D indoor localization system based on visible light LED. In Proceedings of the 2012 IEEE Photonics Society Summer Topical Meeting Series, Seattle, WA, USA, 9–11 July 2012; pp. 80–81.
  27. Kuo, Y.S.; Pannuto, P.; Hsiao, K.J.; Dutta, P. Luxapose: Indoor Positioning with Mobile Phones and Visible Light. In Proceedings of the 20th Annual International Conference on Mobile Computing and Networking, Maui, HI, USA, 7–11 September 2014; ACM: New York, NY, USA, 2014; pp. 447–458.
  28. Wang, T.; Zhao, H.; Shen, Y. An Efficient Single-Anchor Localization Method Using Ultra-Wide Bandwidth Systems. Appl. Sci. 2020, 10, 57.
  29. Li, L.; Hu, P.; Peng, C.; Shen, G.; Zhao, F. Epsilon: A Visible Light Based Positioning System. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), Seattle, WA, USA, 2–4 April 2014; USENIX Association: Seattle, WA, USA, 2014; pp. 331–343.
  30. Zhang, C.; Zhang, X. LiTell: Robust indoor localization using unmodified light fixtures. In Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking, New York, NY, USA, 3–7 October 2016; pp. 230–242.
  31. Zhu, S.; Zhang, X. Enabling High-Precision Visible Light Localization in Today’s Buildings. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, Niagara Falls, NY, USA, 19–23 June 2017; pp. 96–108.
  32. Zhang, C.; Zhang, X. Pulsar: Towards ubiquitous visible light localization. In Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking, Snowbird, UT, USA, 16–20 October 2017; pp. 208–221.
  33. Rajagopal, N.; Lazik, P.; Rowe, A. Visual Light Landmarks for Mobile Devices. In Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, 15–17 April 2014; IEEE Press: Piscataway, NJ, USA, 2014; pp. 249–260.
  34. Lee, Y.U.; Kavehrad, M. Two hybrid positioning system design techniques with lighting LEDs and ad-hoc wireless network. IEEE Trans. Consum. Electron. 2012, 58, 1176–1184.
  35. Ravi, N.; Iftode, L. Fiatlux: Fingerprinting Rooms Using Light Intensity; NA Publishing: Ann Arbor, MI, USA, 2007.
  36. Hamidi-Rad, S.; Lyons, K.; Goela, N. Infrastructure-less indoor localization using light fingerprints. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 5995–5999.
  37. Azizyan, M.; Constandache, I.; Roy Choudhury, R. SurroundSense: Mobile phone localization via ambience fingerprinting. In Proceedings of the 15th Annual International Conference on Mobile Computing and Networking, Beijing, China, 20–25 September 2009; pp. 261–272.
  38. Li, L.; Xie, P.; Wang, J. RainbowLight: Towards Low Cost Ambient Light Positioning with Mobile Phones. In Proceedings of the 24th Annual International Conference on Mobile Computing and Networking, New Delhi, India, 29 October–2 November 2018; pp. 445–457.
  39. Mazilu, S.; Blanke, U.; Calatroni, A.; Tröster, G. Low-power ambient sensing in smartphones for continuous semantic localization. In Lecture Notes in Computer Science; Springer: Berlin, Germany, 2013; Volume 8309, pp. 166–181.
  40. Holmes, A.; Desai, S.; Nahapetian, A. Luxleak: Capturing computing activity using smart device ambient light sensors. In Proceedings of the 2nd Workshop on Experiences in the Design and Implementation of Smart Objects, New York, NY, USA, 3–7 October 2016; pp. 47–52.
  41. Zhang, W.; Chowdhury, M.I.S.; Kavehrad, M. Asynchronous indoor positioning system based on visible light communications. Opt. Eng. 2014, 53, 045105.
  42. Feng, Y.; Chang, C.K.; Chang, H. An ADL Recognition System on Smart Phone. In Proceedings of the 14th International Conference on Inclusive Smart Cities and Digital Health (ICOST 2016), Wuhan, China, 25–27 May 2016; Springer: New York, NY, USA, 2016; Volume 9677, pp. 148–158.
  43. Feng, Y.; Chang, C.K.; Ming, H. Recognizing Activities of Daily Living to Improve Well-Being. IT Prof. 2017, 19, 31–37.
  44. Goral, C.M.; Torrance, K.E.; Greenberg, D.P.; Battaile, B. Modeling the Interaction of Light Between Diffuse Surfaces. SIGGRAPH Comput. Graph. 1984, 18, 213–222.
  45. Hall, M.; Frank, E.; Holmes, G.; Pfahringer, B.; Reutemann, P.; Witten, I.H. The WEKA data mining software: An update. SIGKDD Explor. Newsl. 2009, 11, 10–18.
  46. Wu, J.; Feng, Y. Global Wi-Fi Positioning Method Based on Online Clustering Algorithm. In Proceedings of the 2018 4th International Conference on Big Data Computing and Communications (BIGCOM), Chicago, IL, USA, 7–9 August 2018; pp. 22–27.
  47. Wu, J.; Feng, Y.; Sun, P. Sensor Fusion for Recognition of Activities of Daily Living. Sensors 2018, 18, 4029.
  48. Hu, Y.; Xiong, Y.; Huang, W.; Li, X.; Zhang, Y.; Mao, X.; Yang, P.; Wang, C. Lightitude: Indoor Positioning Using Ubiquitous Visible Lights and COTS Devices. In Proceedings of the International Conference on Distributed Computing Systems, Columbus, OH, USA, 29 June–2 July 2015; pp. 732–733.
Figure 1. Form factor geometry.
Figure 2. The device coordinate system and the world’s coordinate system.
Figure 3. A scenario with a set of luminaires and a mobile terminal.
Figure 4. The system architecture of LiLo.
Figure 5. The luminance field map of a bedroom derived from the Radiosity method.
Figure 6. Real-life data captured. The squares with different colors are the testing points with different illumination.
Figure 7. Floor layout and luminaire placement in one apartment. The numbered positions in green circles are KAAs, and the hollow squares are the luminaires with ID numbers, with different colors indicating different luminous intensities.
Figure 8. Recognition comparison using different attributes. “g” represents the geomagnetic information; “o” represents the orientation information.
Figure 9. Confusion matrix for 21 scenes classified using the Bayesian network.
Figure 10. Comparison of recognition rates using different combinations of attributes. In the x-label, “angle” means that the orientation azimuth angle is leveraged; “correct location” means that high-accuracy Wi-Fi-based localization outcomes are imported; “noisy location” means that wrong pre-recognized locations are imported deliberately for comparison.
Figure 11. Recognition comparison in the bedroom and the living room under different kinds of classifiers. “BayesNet” represents a Bayesian network classifier and “J48” represents a decision tree classifier.
Table 1. Photometry of the luminaires.

ID | Category | Location | Height (m) | Power (W) × Quantity
l1 | ceilingLight | entrance hall | 2.4 | 40 × 1
l2 | fluorescent | above the faucet | 1.65 | 13 × 2
l3 | fluorescent | kitchen ceiling | 2.4 | 54 × 2
l4 | ceilingLight | living room | 2.4 | 13 × 3
l5 | underCloset | above the stoves | 1.23 | 13 × 2
l6 | bathLight | above the vanity table | 2.1 | 50 × 2
l7 | ceilingLight | hallway | 2.4 | 40 × 1
l8 | tableLamp | on the desk | 1.0 | 40 × 1
l9 | ceilingLight | bedroom | 2.4 | 40 × 1
l0 | floorLamp | on the desk | 1.3 | 40 × 1

All the luminaires act as spot light sources because the lamp shades channel the direction of the emitted light. l2 and l3 are fluorescent; the others are incandescent.
Table 2. List of the recorded ADL scenes and the number of recordings for each.

Main Context | Scene | No. of Recordings
Bathroom (910) | Bathroombowel | 405
 | Bathroomfaucet | 273
 | Bathroomflushing | 24
 | Bathroompee | 208
Kitchen (765) | Chop | 32
 | Cooking | 506
 | Washingdishes | 227
DiningRoom (1293) | Breakfast | 15
 | Dinner | 491
 | LivingRoom | 11
 | Lunch | 501
 | Midnightsnack | 275
Bedroom (1261) | Dinnerbedroom | 57
 | Gettingup | 31
 | Nap | 41
 | Sleep | 122
 | WorkingonPCathome | 1010
Public places (159) | Amespubliclibrary | 103
 | PCoffice | 11
 | Gilman1353 | 39
 | RossHall | 6
Total |  | 4388
Table 3. Comparison of recognition accuracy to previous work.

Reference | Modalities | Face-Up | Method | Average Error
IDyLL [13] | Accelerometer, gyro, compass (IMU), PDR, PD on smartphone, conventional luminaries, WD (office buildings) | Yes | Illumination peak detection | 0.38∼0.5 m
M. Yoshino et al. [14] | Image sensor | – | 3D | 1.5 m
C. Sertthin et al. [15] | LED, PD | – | 2D | 1∼2 m
X. Liu et al. [16] | Fluorescent lamp, PD | – | 2D | 0.1∼0.3 m
Luxapose [27] | LED ID with DC, camera, smartphone | Yes | AOA | 0.1 m (testbed-level)
Epsilon [29] | Modulated LED beacons, RSS, custom light sensor, smartphone, user’s gestures | No | MB, DC | 0.4 m
Lightitude [48] | RSS, IMU, PDR, smartphone | No | MB, weights on RSS set | 1.93 m
LiLo | RSS, accelerometer, gyro, orientation, time of day | No | FP, AOA | 0.4 m

DC: device configuration; RSS: received signal strength; FP: fingerprinting-based; MB: model-based (based on a ray projection model); IMU: inertial measurement unit; PDR: pedestrian dead reckoning; PD: photodiode sensor; ML: maximum likelihood; WD: war-driving; AOA: angle of arrival.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
