 Research Article
 Open Access
Occupancy distribution estimation for smart light delivery with perturbation-modulated light sensing
Journal of Solid State Lighting volume 1, Article number: 17 (2014)
Abstract
The advent of modern light-emitting diode (LED) techniques enables us to develop novel lighting systems with numerous previously unavailable features. Specifically, by using the fixtures both for illumination and to interrogate the space, source-to-sensor communication becomes possible at very low cost. In this paper, we present a novel framework to estimate the occupancy distribution in an indoor space using color-controllable LED fixtures (the same fixtures that simultaneously provide the illumination) and sparsely distributed non-imaging color sensors. By modulating randomly generated perturbation patterns onto the drive signals of the LED fixtures and measuring the changes in the color sensor responses, we are able to recover a light transport model for the room. Two approaches are proposed to estimate the spatial distribution of the occupancy, based on a light blockage model and a light reflection model, respectively. These two approaches, which can be combined, can faithfully reveal the occupancy scenario of the indoor space while preserving the privacy of its occupants. An occupancy-sensitive lighting system can be designed based on this technique.
1 Background
As we move from incandescent bulbs to fluorescent bulbs, and on to modern LED fixtures, lighting solutions are becoming more and more energy efficient. A new direction in lighting research is to develop smart lighting systems: lighting systems that can “think” and deliver the right light where and when it is needed. Most such lighting systems have a set of sensors to capture occupancy information in the space. With knowledge of the room’s spatial occupancy distribution in (near) real time, a lighting system can adjust its spatial and spectral distribution to reduce energy consumption and enhance human comfort, well-being, and productivity.
In smart lighting systems, the sensors being used can generally be divided into two categories: imaging sensors and non-imaging sensors. The computer vision community usually employs imaging sensors such as cameras and depth sensors to capture images, videos, and depth maps of the scene. An image, whether gray-level, RGB, or depth, has a 2D structure, which describes the spatial distribution of objects or people in the space. A great deal of high-level information can be inferred from such data with computer vision and pattern recognition methods, enabling various applications such as object detection and tracking, event detection, and traffic surveillance.
However, in most lighting applications, human-readable high-resolution images are not only unnecessary but also undesirable, as they present an information security concern. For example, when monitoring the occupancy of a room for intelligent lighting control, we only need a very rough estimate of which part of the room is occupied. Using cameras raises privacy concerns: people simply feel uncomfortable being monitored by a camera. If the security of the camera network is compromised, there is an even greater risk to privacy.
To ameliorate these concerns, non-imaging sensors offer a good alternative to cameras. In this paper, we propose using low-cost color sensors based on photodiodes and color filters. The output of a non-imaging color sensor is usually only a few numeric values measuring the local luminous flux of different colors, rather than a focused image. These sensors present no privacy concerns. However, due to the very limited information that can be obtained from such color sensors, it is very difficult to infer high-level information from the sensor readings. Estimating the 2D or 3D occupancy distribution in the space from a limited number of 1D sensor outputs is an ill-posed and extremely challenging problem.
Fortunately, the emergence of modern LEDs unlocks a new direction for us. Modern LED fixtures are controllable over each color channel, and allow rapidly changing inputs to drive these channels. The changes in the light can be sensed by photodiodes, which makes source-to-sensor communication possible, giving rise to new techniques such as visible light communication (VLC), sometimes called light fidelity (LiFi) [1]–[6]. The idea of using visible light for both illumination and communication at the same time with the same fixture is often referred to as “dual-use lighting”. In our work, we measure the color sensor output under different lighting conditions. With repeated measurements, we can construct a model describing the spatial transport of the light. Such a model captures rich information about the 3D space, and can be used to roughly estimate the occupancy distribution. With the estimated occupancy distribution, we can produce the lighting condition that best suits the occupancy scenario, improving energy efficiency, enhancing human comfort, well-being, and productivity, and even extending the lifespan of the LEDs to delay fixture replacement.
The remainder of this paper is organized as follows: In Section “Related work”, we review previous work related to our proposed technique. In Section “Testbed setup”, we introduce our experimental testbed. In Section “Recovering the light transport matrix”, we describe how we solve for the light transport matrix in a lighting system. In Section “Perturbation-modulated lighting”, we introduce the perturbation-modulated lighting method, which is necessary for light transport sensing. Sections “3D scene reconstruction with light blockage model” and “Floor-plane occupancy mapping with light reflection model” introduce the two approaches that we use for occupancy distribution estimation, based on wall-mounted sensors and ceiling-mounted sensors, respectively. Section “Results” reports the experimental results. Discussions are provided in Section “Discussions”, and Section “Conclusion” concludes the paper.
2 Related work
2.1 Occupancy-based lighting
A number of smart lighting systems have been designed to adjust the lighting condition according to the occupancy in the space, and there are various options for the occupancy sensor, from imaging sensors to non-imaging sensors. For example, in 1987, Rea and Jaekel used video systems, infrared, ultrasonic, and electric eyes to assess energy efficiency in lighting a staff room (6.0×8.8 m) [7]. In 1992, an imaging lighting control system called ImCon was proposed, which used a charge-coupled device (CCD) camera to monitor the occupancy in a test room (5.6×5.6 m) and control four fluorescent fixtures [8]. In 2009, Delaney et al. proposed using a network of passive infrared (PIR) sensors and light sensors to evaluate energy efficiency in lighting systems [9]. In 2010, Agarwal et al. proposed a smart building automation solution using a combination of PIR sensors and magnetic reed switch door sensors [10]. Recently, Aldrich et al. developed a lighting control application using networks of PIR sensors [11]. In 2010, Caicedo et al. looked into the problem of how to optimize the dimming levels of LED fixtures based on localized occupancy information [12]. A review paper by Guo et al. has comprehensively discussed different sensors that have been used in occupancy-based lighting control systems, including PIR sensors, ultrasonic sensors, audible sound sensors, microwave sensors, light barriers, video cameras, biometric systems, and pressure sensors [13]. Another review paper by Hassan et al. also discussed several occupancy detection techniques for lighting control applications, including PIR sensors, ultrasonic sensors, radio frequency identification (RFID), and cameras [14].
However, to the best of our knowledge, no prior work uses non-imaging color sensors based on photodiodes and color filters, together with modulated illumination from the existing fixtures simultaneously providing light for the space, to implement occupancy-sensitive lighting control systems. Such color sensors can be built at very low cost, and since one color sensor outputs only a few numeric values, they raise no privacy concerns. We provide a comparison between non-imaging color sensors and imaging sensors such as webcams or Kinect in Figure 1. Comparisons between PIR, ultrasonic, microwave, video, and several other sensors can be found in [13].
The major difference between color sensors and other non-imaging occupancy sensors is that color sensors use visible light delivered by the fixtures, while PIR sensors use infrared, and ultrasonic sensors and audible sound sensors use sound waves. A PIR sensor detects the infrared radiation emitted by an object, so it works well for detecting people or animals. The gradient of the change in the infrared field can be used to detect motion of people or animals, enabling applications such as burglar alarms and automatically activated lighting systems. The color-sensor-based occupancy sensing technique proposed in this paper detects changes in the visible light field, which are often caused by blockage of light paths, or by changing reflection surfaces as people (or objects) move around the space. Thus, any object that affects the visible light field can be detected, rather than only objects that emit infrared radiation. Ultrasonic sensors are active devices that emit ultrasonic sound waves and use the time interval between emission and echo to calculate the distance to objects. In contrast, color sensors cannot measure distance, and they are passive sensors, although we actively add perturbations to the light from the existing fixtures. Ultrasonic sensors often suffer from false alarms, while PIR sensors have more misses [13]. Audible sound sensors are seldom used in smart lighting systems because they are ill-suited to the problem: environmental noise can cause a very high false alarm rate, and quiet occupants can cause a very high miss rate.
2.2 Light transport model
In many multi-source multi-sensor systems such as sonar, ultrasound, and scanning electron microscopes, the process of the system follows an affine relationship:

$$ y = A x + b \qquad (1) $$

where vector x is the input signal to all sources, and vector y is the measurements from all sensors. The matrix A can be understood as the coefficients of the process, and vector b is the systematic bias.
Specifically, the computer graphics community is very interested in visible light source and camera sensor systems, where it is often assumed that b = 0, and the matrix A is often referred to as the light transport matrix. The light transport matrix is an effective tool for relighting real-world scenes (illuminating the scene with a virtual pattern as a post-process) [15]–[19], and can also be used to interchange the lights and cameras in a scene [20],[21], or for radiometric compensation [22].
In our smart lighting system, the vector x is the input to all LED fixtures, and the vector y is the measurements from all non-imaging color sensors. The lighting system may appear similar to a structured light system at first glance. However, there are several significant differences between the smart lighting problem as posed here and the structured light technique, which is often used to build models for computer graphics. First, in structured light, the source is usually a focused, high-resolution projector, projecting specific (sometimes complicated, but precise) structured light patterns onto the scene, and the sensor is usually a high-resolution camera [16],[17],[20]. In a smart lighting system, however, we usually have only a few fixtures and a few sensors due to the cost of hardware and installation. And the light itself is not structured at all, beyond the ordinary placement of fixtures in, for example, the ceiling. Thus, the light transport matrix A as used in computer graphics is usually very large, while in the smart lighting problem it is much smaller and contains far less information about the space.
Second, in structured light, different pixels of the projector usually illuminate different non-overlapping regions of the scene. In contrast, in a smart lighting system, any fixture could conceivably illuminate the entire space, although different fixtures installed at different locations will have different luminous intensity distributions over the space. Likewise, in structured light, different pixels in the image captured by the camera correspond to different non-overlapping regions of the scene, but a color sensor receives luminous flux from a very wide field of view, weighted by a spatial distribution function.
Further, unlike in computer graphics where people usually assume b=0, in a smart lighting system, the vector b is usually nonzero because it represents the sensor response to ambient light, such as sunlight or other external (uncontrolled) light sources.
2.3 3D reconstruction
Based on the estimated light transport matrix A, in this paper we propose two approaches to estimate the occupancy distribution: the light blockage model (Section “3D scene reconstruction with light blockage model”) and the light reflection model (Section “Floor-plane occupancy mapping with light reflection model”). The light blockage model is based on wall-mounted sensors, and results in 3D volumes. The light reflection model is based on ceiling-mounted sensors, and results in 2D maps (projections onto the floor plane).
The first approach, 3D scene reconstruction with a light blockage model, is closely related to existing work in the medical imaging, computer vision, robotics, and wireless sensor network literature. In medical imaging, techniques for 3D volume data reconstruction from projections include Fourier Slice Theorem based methods [23],[24], Algebraic Reconstruction Techniques (ART) [25], statistical methods [26], and total variation based methods [27]. In computer vision, people are interested in estimating the visual hull of a 3D object using 2D images [28],[29]. In robotics, an interesting problem is obstacle/object mapping — computing a spatial map to represent the obstacles or objects in the environment [30]. In wireless sensor networks, a related technique is Radio Tomographic Imaging (RTI), which uses the attenuation in received signal strength (RSS) caused by physical objects to create an image [31].
Reconstructing the 3D scene in a fixture-sensor smart lighting system is a very different problem from all the work mentioned above. In medical imaging (for example, computed tomography), multiple radiation sources (X-rays, for example) and sensors are typically rotated around the object to create numerous lines, and 3D images can be acquired slice by slice. In robotics, robots can move in the environment to sense at different locations. However, in a smart lighting system, all fixtures and sensors are firmly installed in the room and should not be moved during operation. Besides, the number of sensors is usually very small, unlike the visual hull problem in computer vision [28],[29], where an image has many pixels. Further, as we have discussed, any fixture illuminates the entire space, albeit non-uniformly, and any sensor receives light from a wide field of view. Thus, the spatial information contained in the small light transport matrix in our problem is very limited. 3D reconstruction from so little information is extremely ill-posed, and we should expect very rough, low-resolution reconstruction results. However, that is all we need: since our goal is to control the lighting condition in the room, a rough reconstruction suffices for this task.
3 Methods
3.1 Testbed setup
3.1.1 The smart space testbed
To implement and validate our ideas, we have established a Smart Space Testbed (SST). This room has one window and two doors, and is 85.5 inches wide, 135.0 inches long, and 86.4 inches high (Figure 2a). The testbed is equipped with twelve color-controllable LED fixtures mounted in the ceiling (Figure 2c). For each fixture, we can independently specify the intensity of three color channels: red, green, and blue. The input to each channel is scaled to lie in the range [0, 1]. We use twelve Colorbug wireless optical light sensors by SeaChanger (Figure 2b) as the color sensors in these experiments. These sensors can be installed either on the walls (Figure 2d) or on the ceiling. The key component of this sensor is an array of color-filtered photodiodes. Each color sensor has four output channels: red, green, blue, and white (unfiltered). We use the Robot Raconteur software [32] for communication: the software connects to the color sensors over WiFi, and sends input signals to the fixtures via Bluetooth. This same testbed has been used for a number of other investigations, including lighting control algorithms [33]–[36] and visual tracking systems [37].
3.1.2 The occupancy-sensitive lighting system
The final goal of our system is to achieve occupancy-sensitive smart lighting. In other words, when the occupancy distribution in the room changes, the system should produce the lighting condition that best suits the new occupancy scenario, maximizing comfort, well-being, and productivity while minimizing energy consumption. In most cases, by “occupancy distribution” we mean the number and spatial locations of people in the room. For this purpose, the system comprises a control strategy module and an occupancy sensing module, which work in two alternating stages: the sensing stage and the adjustment stage (Figure 3). In the sensing stage, the occupancy sensing module collects the sensor readings to estimate the occupancy distribution; in the adjustment stage, the control strategy module decides what lighting condition should be produced based on the estimated occupancy distribution. The design of control strategies is beyond the scope of this paper; here we focus on the occupancy sensing module.
3.1.3 Limitations of current testbed
The twelve LED fixtures in the smart space are 7″ LED Downlight Round RGB (Vivia 7DR3RGB) products from Renaissance Lighting, and these fixtures exhibit approximately 0.3 seconds of delay between the input signals being specified and the desired lighting condition being produced. The current Colorbug sensors are commercial products, which are easy to install, but they are expensive, slow, and not customizable: each color measurement from the Colorbug sensors takes a few seconds. Thus, due to the very limited performance of our current fixtures and sensors, we are not able to fully implement a real-time occupancy-sensitive lighting system. However, we do emphasize that ultra-fast LEDs and photodiodes have been used for visible light communication [38]–[41], and these LEDs and photodiodes can also be used for occupancy sensing. The experiments in this paper, using our current fixtures and sensors, suffice as a proof of concept and validation of the methods.
3.2 Recovering the light transport matrix
Since the current configuration of our testbed has twelve LED fixtures with three channels each, the input to the system is an $m_1 = 36$ dimensional signal x. Because we have twelve color sensors, each with four channels, the measurement is an $m_2 = 48$ dimensional signal y. We have performed experiments to confirm that the affine relationship in Eq. (1) holds for our fixture-sensor system, where the matrix A is called the light transport matrix, and the vector b is the sensor response to the ambient light. If the affine relationship does not hold for certain fixtures or sensors, we can usually calibrate the fixtures or sensors to linearize their responses and make Eq. (1) hold.
The light transport matrix A is a very good signature of the occupancy distribution in the space, since it is independent of the fixture input and the ambient light. Matrix A depends only on the light transport of the scene, such as diffuse reflection, specular reflection, inter-reflection, and refraction [22]. Thus, by analyzing matrix A, we can extract spatial information about the scene.
3.2.1 Light transport in projector-camera systems
Efficient acquisition methods for the light transport matrix A have been extensively studied by the computer graphics community, because, due to the high dimensionality of vectors x and y, the light transport matrix A is usually very large in a projector-camera system, and the process of taking sufficient photos to recover A would be very slow. Efficient light transport sensing methods based on compressed sensing techniques have been studied by Sen et al. [21] and Peers et al. [18]. Wang et al. proposed a kernel Nyström method to efficiently reconstruct a low-rank approximation of matrix A. O’Toole et al. presented a low-rank approximation solution using an optical implementation of Arnoldi iteration: the photo captured by the camera is iteratively projected back onto the scene [19].
3.2.2 Light transport in fixture-sensor systems
The efficient methods mentioned above are interesting. However, a smart lighting system is very different from a projector-camera system. We cannot apply an arbitrary lighting condition to the space to acquire light transport information: a smart lighting system is built for a space where people live and work, and we must ensure their comfort. The good news is that we can still change the lighting condition, but with very small changes that are imperceptible to the room’s occupants. Also, since a smart lighting system has only a few fixtures and a few sensors, the light transport matrix A is much smaller than in a projector-camera system. Since modern LEDs and photodiodes can operate very fast (so fast that they can be used for communication at megabit-per-second [38] or even gigabit-per-second [39]–[41] data rates), sufficient measurements can be acquired within a very short time period, during which we can assume both the occupancy distribution and the ambient light conditions are unchanged. We refer to this as the quasi-static assumption.
Eliminating b.
To eliminate the ambient light response from Eq. (1), we proceed as follows. We first set the LED input to a reference level $x_0$, and the output of the sensors is

$$ y_0 = A x_0 + b \qquad (2) $$

Now if we add a small perturbation $\delta x$ to the input, the new output becomes

$$ y_0 + \delta y = A (x_0 + \delta x) + b \qquad (3) $$

By simple subtraction, we can eliminate b, and get

$$ \delta y = A\, \delta x \qquad (4) $$
In our smart lighting system, we call $x_0$ the base light, which is determined by the control strategy module. We call $\delta x$ a perturbation, which will be discussed in Section “Perturbation-modulated lighting”. Depending on the desired lighting conditions and possible changes in the room occupancy, $x_0$ may be adjusted over time, but not during sensing.
Solving for A.
If we can apply different perturbations to the fixtures very fast, and also read the sensor outputs very fast, we can make many measurements within a very short time period, during which we can assume both matrix A and vector b do not change. Thus, if we measure $y_0$ once, and measure $y_0 + \delta y$ multiple times with different $\delta x$, we get a linear system to solve for A. In other words, we perturb the input $x_0$ to the LED fixtures with different $m_1$-dimensional signals $\delta x_1, \delta x_2, \dots, \delta x_n$, and measure the $m_2$-dimensional changes of the sensor readings $\delta y_1, \delta y_2, \dots, \delta y_n$. Let $X = [\delta x_1, \delta x_2, \dots, \delta x_n]$ and $Y = [\delta y_1, \delta y_2, \dots, \delta y_n]$, where $X \in \mathbb{R}^{m_1 \times n}$ and $Y \in \mathbb{R}^{m_2 \times n}$. Now the problem becomes a linear system $Y = A X$, which is very similar to the light transport problem in computer graphics.
With modern LEDs and rapid-response color sensors, we can usually make many measurements in a short time period to ensure $n > m_1$. This overdetermined linear system can then be solved by the Moore–Penrose pseudoinverse:

$$ A = Y X^{+} = Y X^{T} \left( X X^{T} \right)^{-1} \qquad (5) $$

which corresponds to minimizing the Frobenius norm of the error:

$$ \min_{A} \left\| Y - A X \right\|_{F} \qquad (6) $$
If under some circumstances n is smaller than $m_1$, then $Y = A X$ is an underdetermined system, and other methods such as recursive least squares (RLS) [35],[42], low-rank approximation, or sparse approximation [43] can be used. In our problem, we can always make enough measurements to ensure $n > m_1$ and use the simple pseudoinverse method.
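As a sanity check on the derivation above, the whole sensing pipeline (reference measurement, perturbation, subtraction, pseudoinverse) can be simulated in a few lines. This is a minimal sketch assuming a noise-free affine system with our testbed's dimensions; the matrices, base light, and perturbation magnitude are simulated stand-ins, not measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

m1, m2, n = 36, 48, 100          # fixture channels, sensor channels, measurements
A_true = rng.random((m2, m1))    # "unknown" light transport matrix (simulated)
b = rng.random(m2)               # ambient-light response, to be eliminated
x0 = 0.5 * np.ones(m1)           # base light

y0 = A_true @ x0 + b             # reference measurement, Eq. (2)

# Apply n random perturbations and record only the *changes* in sensor
# output; the subtraction cancels the ambient term b (Eq. 4).
X = rng.uniform(-0.025, 0.025, size=(m1, n))   # columns are the delta-x patterns
Y = (A_true @ (x0[:, None] + X) + b[:, None]) - y0[:, None]

# Overdetermined system Y = A X (n > m1), solved by the pseudoinverse, Eq. (5).
A_est = Y @ np.linalg.pinv(X)

print(np.allclose(A_est, A_true))   # expect True in this noise-free simulation
```

With sensor noise, `A_est` would only approximate `A_true`, and more measurements (larger n) would be needed to average the noise out.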
3.3 Perturbation-modulated lighting
3.3.1 Perturbation modulation
As introduced in Section “The occupancy-sensitive lighting system”, the smart lighting system works in two alternating stages: sensing and adjustment. During the sensing stage, perturbations $\delta x$ are added to the base light $x_0$, and $\delta y$ is measured. Then, in the adjustment stage, matrix A is computed, the occupancy distribution is estimated, and the control strategy module gradually changes the base light to a new one (if necessary), determined according to the estimated occupancy distribution. In such a system, the base light changes slowly over a large range, while the perturbation changes quickly, and ideally imperceptibly, within a small range (Figure 4).
3.3.2 Requirements for perturbation patterns
To accurately recover the light transport matrix while also ensuring the comfort of the occupants of the space, we specify three requirements on the perturbation patterns:

1. The perturbation patterns must be rich enough in variation to capture sufficient information from the scene.

2. The magnitude of the perturbation must be small enough not to bother humans in the space.

3. The magnitude of the perturbation must be large enough to be accurately measured by the color sensors.
To meet the first requirement, randomly generated patterns usually suffice [44]. If we define the magnitude of the perturbation patterns as the maximum deviation from the base light, $\rho = \max_i \|\delta x_i\|_\infty$, then the choice of ρ is a tradeoff. We have performed sensitivity analyses, and some of the results are shown in Figures 5 and 6. To study the sensor sensitivity, we add a sinusoid of a specific magnitude to one LED on one color channel, and we record the response of one sensor in the same color channel. Figure 5 shows the results for the green channel. As we can see, based on a range of [0, 1], when ρ is as small as 0.01, the sensor response is noticeably distorted; as ρ gets larger, the sensor response becomes well-behaved (more linear). In Figure 6, we show four images of the room taken by a camera at different times during the perturbation interval for each ρ. We have observed that when ρ is large, the change of lighting can be very annoying^a. In our work, we set ρ = 0.025 such that perturbations are not easily noticed, but can be accurately sensed by our current color sensors. Improved sensors will allow a larger range of acceptable ρ values.
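The magnitude constraint $\rho = \max_i \|\delta x_i\|_\infty$ is easy to enforce at generation time. A minimal sketch; sampling each entry uniformly from [-ρ, ρ] and clipping the combined drive signal into [0, 1] are our illustrative choices, not prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

m1, n, rho = 36, 100, 0.025      # fixture channels, number of patterns, magnitude

# Each entry is drawn uniformly from [-rho, rho], so the magnitude
# max_i ||dx_i||_inf can never exceed rho.
patterns = rng.uniform(-rho, rho, size=(n, m1))

# The perturbed drive signal must remain a valid input in [0, 1].
x0 = 0.5 * np.ones(m1)                       # base light
perturbed = np.clip(x0 + patterns, 0.0, 1.0)

print(np.abs(patterns).max() <= rho)         # magnitude requirement holds
```

If the base light is near 0 or 1 on some channel, the clipping shrinks the effective perturbation there, which would have to be accounted for when recovering A.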
3.3.3 Perturbation ordering
Now assume that we have randomly generated n perturbation patterns $\delta x_1, \delta x_2, \dots, \delta x_n$ with magnitude ρ. In the sensing stage, we apply these patterns to measure the changes in sensor output, and recover the light transport matrix A. Here one question arises: in what order should we arrange these perturbation patterns to maximize human comfort?
Studies of the human visual system have established the thresholds needed to see flicker at different frequencies [45],[46]. Intuitively, we wish the light to change gradually, and thus less noticeably. For gradual changes, we wish neighboring perturbation patterns to be as similar as possible. Let $(i_1, i_2, \dots, i_n)$ be a permutation of $(1, 2, \dots, n)$. Then $(\delta x_{i_1}, \delta x_{i_2}, \dots, \delta x_{i_n})$ is a reordering of the patterns $(\delta x_1, \delta x_2, \dots, \delta x_n)$. We naturally come to the following optimization problem:

$$ \min_{(i_1, \dots, i_n)} \; \left\| \delta x_{i_1} \right\| + \sum_{k=1}^{n-1} \left\| \delta x_{i_{k+1}} - \delta x_{i_k} \right\| + \left\| \delta x_{i_n} \right\| \qquad (7) $$

where $\|\cdot\|$ is a chosen vector norm, usually the $\ell_2$ norm. The first and last terms account for the transitions from and back to the base light, whose perturbation is all zeros.
The optimization problem in Eq. (7) has a very straightforward graph-theoretical interpretation. We create a weighted complete undirected graph G with n+1 vertices: each perturbation pattern $\delta x_i$ is a vertex, plus one vertex corresponding to the base light. The weight of an edge between two vertices is the norm of the difference between the two corresponding perturbation patterns, where the perturbation pattern corresponding to the base light is all zeros. Finding the solution to Eq. (7) is equivalent to finding the shortest Hamiltonian cycle of G, i.e., solving the famous NP-hard Travelling Salesman Problem (TSP), which has been intensely studied [47]. Thus, any existing TSP algorithm (e.g. [48]–[51]) can be used to solve Eq. (7). In our work, we use a very simple genetic algorithm [52], where the mutation of a genome (a Hamiltonian cycle) is simply cross-linking two randomly selected non-incident edges, as shown in Figure 7.
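For illustration, the reordering can also be attacked with a plain 2-opt local search, whose segment-reversal move is the same edge cross-linking used as the mutation above. This sketch is a simplification of the paper's genetic algorithm, with function name, iteration budget, and norm chosen by us:

```python
import numpy as np

def order_perturbations(patterns, iters=20000, seed=0):
    """Order perturbation patterns so that consecutive ones are similar.

    2-opt local search over a Hamiltonian cycle: reversing the segment
    between two non-incident edges is the cross-linking move. Vertex 0
    is the base light (the all-zero perturbation).
    """
    rng = np.random.default_rng(seed)
    pts = np.vstack([np.zeros(patterns.shape[1]), patterns])  # prepend base light
    n = len(pts)
    # Pairwise ell-2 distances = edge weights of the complete graph G.
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    tour = list(range(n))                 # start from the identity ordering
    for _ in range(iters):
        i, j = sorted(rng.choice(n, size=2, replace=False))
        if j - i < 2:
            continue                      # the two removed edges must be non-incident
        a, b, c, d = tour[i], tour[i + 1], tour[j], tour[(j + 1) % n]
        # Accept the segment reversal only if it shortens the cycle.
        if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d]:
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
    return tour

# Example: order 50 random patterns of magnitude 0.025.
rng = np.random.default_rng(0)
patterns = rng.uniform(-0.025, 0.025, size=(50, 36))
order = order_perturbations(patterns)
```

Since only strictly improving reversals are accepted, the cycle length is non-increasing, so the result is never worse than the unordered sequence.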
3.4 3D scene reconstruction with light blockage model
In Section “Light transport in fixture-sensor systems”, we discussed how to obtain the light transport matrix in a fixture-sensor system. In this section, we introduce the first approach for estimating the occupancy distribution using the light transport matrix A. This approach requires the color sensors to be installed on the walls of the room. In Section “Floor-plane occupancy mapping with light reflection model”, we will introduce a second approach, which can use ceiling-mounted color sensors.
3.4.1 Light blockage model
Let the light transport matrix of an empty room^b be $A_0$. At run time, the light transport matrix is A, and we call $E = A_0 - A$ the difference matrix. Matrix E is also $m_2 \times m_1$, and each entry of E corresponds to one fixture channel and one sensor channel. If an entry of matrix E has a large positive value, the total flux is significantly attenuated; that is, many of the light paths from the corresponding fixture to the corresponding sensor are very likely blocked. With all sensors mounted on the walls, from any given fixture to any given sensor there are numerous diffuse reflection paths and one direct path, which is the line segment connecting the fixture and the sensor (Figure 8a). Obviously, the direct path is the dominant path, if one exists. Thus, a large entry of E most likely implies that the corresponding direct path has been blocked due to the change of occupancy distribution.
3.4.2 Aggregation of E
Though each entry of E corresponds to one direct path, the converse is not true, since each LED fixture or sensor has multiple channels. Assume the number of LED fixtures is $N_L$, and the number of sensors is $N_S$. We aggregate the $m_2 \times m_1$ matrix E into an $N_S \times N_L$ matrix $\hat{E}$, such that the mapping from the entries of $\hat{E}$ to all direct paths is a bijection. In our experiments, $m_1 = 3 N_L = 36$ and $m_2 = 4 N_S = 48$. The aggregation is performed on each fixture-sensor pair as a weighted summation over the three color channels, red, green, and blue:

$$ \hat{E}_{i,j} = w_R E_{(i,R),(j,R)} + w_G E_{(i,G),(j,G)} + w_B E_{(i,B),(j,B)} \qquad (8) $$

where $E_{(i,c),(j,c)}$ denotes the entry of E for channel c of sensor i and channel c of fixture j, and the weights $w_R$, $w_G$, and $w_B$ can be used to compensate for the different sensitivities of the sensors on different color channels.
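In code, the aggregation is a weighted gather over matching channel indices. A minimal sketch; the (R, G, B) ordering of channels within each fixture and sensor, and ignoring the sensors' white channel, are assumptions for illustration:

```python
import numpy as np

def aggregate(E, n_fixtures=12, n_sensors=12, w=(1.0, 1.0, 1.0)):
    """Aggregate the 48x36 difference matrix E into a 12x12 matrix E_hat,
    one entry per direct fixture-to-sensor path (Eq. 8).

    Assumes fixture rows/columns are ordered (R, G, B) per fixture and
    (R, G, B, W) per sensor; the white channel is not used here. The
    weights w compensate for per-channel sensor sensitivity.
    """
    w_R, w_G, w_B = w
    E_hat = np.zeros((n_sensors, n_fixtures))
    for i in range(n_sensors):        # sensor index
        for j in range(n_fixtures):   # fixture index
            E_hat[i, j] = (w_R * E[4 * i + 0, 3 * j + 0]
                           + w_G * E[4 * i + 1, 3 * j + 1]
                           + w_B * E[4 * i + 2, 3 * j + 2])
    return E_hat
```

Each `E_hat[i, j]` then summarizes the attenuation on the single direct path from fixture j to sensor i, as required for the bijection.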
3.4.3 Reconstruction algorithm
After aggregation, if $\hat{E}$ has a large entry at (i,j), then we believe the direct path from fixture j to sensor i is very likely blocked, though we are still not sure where along this path the blockage happens. Making the reasonable assumption that any occupants will have large cross sections relative to the thickness of a light path, any position that is close to this direct path is also likely to be occupied. If two or more such direct paths intersect, or approximately intersect, in the 3D space, then it is most likely that the blockage happens at their intersection, as shown in Figure 8b.
Based on this assumption, we now describe our 3D reconstruction algorithm. For any point in 3D space, we estimate the confidence that the point is occupied. Let P be an arbitrary point in 3D space, and let d_{i,j}(P) be the point-to-line distance from P to the direct path from fixture j to sensor i. The confidence of point P being occupied is C(P), computed by:

$$C(P)=\frac{\sum_{i=1}^{N_{S}}\sum_{j=1}^{N_{L}}\mathcal{E}_{i,j}\,G(d_{i,j}(P),\sigma)}{\sum_{i=1}^{N_{S}}\sum_{j=1}^{N_{L}}G(d_{i,j}(P),\sigma)}\qquad(9)$$

where G(·,·) is the Gaussian kernel:

$$G(d,\sigma)=\exp\left(-\frac{d^{2}}{2\sigma^{2}}\right)$$

The denominator in Eq. (9) is a normalization term that compensates for the nonuniform spatial distribution of the LED fixtures and the sensors. The parameter σ is a measure of the continuity and smoothness of the occupancy, and should be related to the physical size of the occupants we expect. For simplicity, we assume σ is isotropic. If we discretize the 3D space and evaluate Eq. (9) at every position P(x,y,z), we can render a 3D volume V(x,y,z)=C(P(x,y,z)) of the scene, which can then be visualized.
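A minimal sketch of this reconstruction, under the assumptions that each direct path is represented by its fixture and sensor endpoint coordinates and that Eq. (9) is the kernel-normalized weighted back-projection described above:

```python
import numpy as np

def point_to_segment_dist(P, a, b):
    """Distance from point P to the segment (direct path) from a to b."""
    ab, ap = b - a, P - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(P - (a + t * ab))

def confidence(P, fixtures, sensors, Escript, sigma=20.0):
    """C(P): Gaussian-weighted back-projection over all direct paths,
    normalized by the kernel sum (our reading of Eq. (9)).
    fixtures, sensors: lists of 3D coordinates; Escript: (N_S, N_L)."""
    num = den = 0.0
    for i, s in enumerate(sensors):
        for j, f in enumerate(fixtures):
            g = np.exp(-point_to_segment_dist(P, f, s) ** 2 / (2 * sigma ** 2))
            num += Escript[i, j] * g
            den += g
    return num / den
```

Evaluating `confidence` at every voxel center of the discretized room yields the volume V(x,y,z).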
3.4.4 Connection with the Radon transform
Our 3D reconstruction method is partially inspired by the well-known Radon transform, or more precisely the inverse Radon transform, which has been successfully applied to image reconstruction in computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and even radar astronomy [53],[54]. Given a continuous function f(x,y) on ${\mathbb{R}}^{2}$, its Radon transform is a function defined on each straight line L={(x(t),y(t))} in ${\mathbb{R}}^{2}$:

$$Rf(L)=\int_{L}f(x,y)\,\mathrm{d}s$$
Since a straight line can be uniquely defined by two parameters, Rf is also a function on ${\mathbb{R}}^{2}$. The original function f can be reconstructed by the inverse Radon transform, which comprises a ramp filter and a back-projection. An example is shown in Figure 9. In our reconstruction algorithm, Eq. (9), the denominator corresponds to the ramp filter, and the summation over all direct light paths corresponds to the back-projection.
As discussed in Section “3D reconstruction”, unlike a tomography problem where the sampled lines are very dense, in our smart lighting problem, with only twelve LED fixtures and twelve sensors that are fixed during the measurements, the direct light paths are very sparse (Figure 10). This makes reconstruction much more challenging than problems that can be solved by a standard inverse Radon transform. Thus we can only expect very rough reconstruction results (but that is all we need or want), and a simple algorithm like Eq. (9) suffices.
3.5 Floor-plane occupancy mapping with light reflection model
The 3D scene reconstruction approach introduced in Section “3D scene reconstruction with light blockage model” is based on a light blockage model, and thus requires that the sensors be installed such that a direct light path exists for every fixture-sensor pair and is easily blocked by occupants. Practically speaking, this means the sensors are mounted on the walls. If the sensors are instead installed on the ceiling, there is no direct light path from fixture to sensor; we only have reflection paths. With all sensors and fixtures in the same plane, we no longer have any spatial information along the z-axis (see Figure 2a for the spatial coordinate system of the testbed). In this section, we introduce our second occupancy distribution estimation approach, which models the light transport for ceiling-mounted sensors using geometrical optics and photometric analysis.
3.5.1 Photometry for fixtures and sensors
Before we describe our light reflection model, we need to revisit our fixtures and sensors. What physical quantities should we use to describe the fixture input and the sensor output? In photometry, luminous intensity measures the power emitted by a light source in a particular direction per unit solid angle. A numeric value read from a sensor is a luminous flux, which measures the perceived power of the incident light.
For a light fixture, the luminous intensity is not isotropic. For example, the polar luminous intensity distribution of our Vivia 7DR3RGB fixture is shown in Figure 11. Let the luminous intensity in the normal direction be I_{max}. Then in a direction at angle θ to the normal, the luminous intensity can be written as I_{max}·q(θ).
3.5.2 Light reflection model
With the color sensors installed on the ceiling, what does a large entry in the aggregated difference matrix $\mathcal{E}$ (see Section “Aggregation of E”) signify? It still means that the light paths from the corresponding fixture to the corresponding sensor are affected. Though these light paths are all diffuse reflection paths, we can still roughly estimate which areas of the room are more likely to be occupied than others. For this purpose, we consider a very small area ds_{1} on the floor plane and one fixture-sensor pair. As shown in Figure 12, the fixtures illuminate the room downward, and the color sensors “look” downward. We assume that the sensing area of the color sensor is ds_{2}, the angle of the light path from the fixture to ds_{1} is θ_{1}, the angle of the light path from ds_{1} to ds_{2} is θ_{2}, the distance from the fixture to ds_{1} is D_{1}, and the distance from ds_{1} to ds_{2} is D_{2}. We also assume that ds_{1} is an ideal matte Lambertian surface with albedo α.
First, we consider the luminous flux arriving at ds_{1} from the fixture. The luminous intensity along the light path from the fixture to ds_{1} is I_{max}·q(θ_{1}), and the solid angle^{c} is $\frac{ds_{1}\cos\theta_{1}}{4\pi D_{1}^{2}}$. Thus the luminous flux arriving at ds_{1} is the product of the luminous intensity and the solid angle:

$$\Phi_{1}=I_{\text{max}}\,q(\theta_{1})\cdot\frac{ds_{1}\cos\theta_{1}}{4\pi D_{1}^{2}}$$
Since the albedo of ds_{1} is α, the luminous intensity of the light reflected from ds_{1} in the normal direction is proportional to αΦ_{1}; for simplicity, we use αΦ_{1} to denote this luminous intensity. Since ds_{1} is a Lambertian surface, its luminance is isotropic and its luminous intensity obeys Lambert’s cosine law. Thus the luminous intensity of the reflected light along the path from ds_{1} to ds_{2} is αΦ_{1}cosθ_{2}, and the solid angle from ds_{1} to ds_{2} is $\frac{ds_{2}\cos\theta_{2}}{4\pi D_{2}^{2}}$. Finally, the luminous flux arriving at ds_{2} from the fixture via reflection at ds_{1} is:

$$\Phi_{2}=\alpha\Phi_{1}\cos\theta_{2}\cdot\frac{ds_{2}\cos\theta_{2}}{4\pi D_{2}^{2}}\qquad(13)$$
For all fixtures, I_{max} and the function q(·) are the same. For all sensors, ds_{2} is the same. For different positions on the floor plane, we assume the albedo α is constant and use a ds_{1} of the same area. Then Φ_{2} is a function of the position of ds_{1} on the floor plane:

$$\Phi_{2}(x,y)=K\,\frac{q(\theta_{1})\cos\theta_{1}\cos^{2}\theta_{2}}{D_{1}^{2}D_{2}^{2}}\qquad(14)$$

where $K=\alpha I_{\text{max}}\frac{ds_{1}\,ds_{2}}{16\pi^{2}}$ is a constant independent of position and shared by all fixture-sensor pairs, while θ_{1}, θ_{2}, D_{1}, and D_{2} all depend on the position.
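The position dependence can be sketched by evaluating Eq. (14) on the floor plane for one fixture-sensor pair. The intensity profile q(θ) below is a placeholder (the real profile comes from the fixture’s polar distribution, Figure 11), and the constant K is dropped since it is shared by all pairs:

```python
import numpy as np

def reflection_kernel(fix, sen, xs, ys, q=lambda th: np.cos(th)):
    """Evaluate Phi_2 (up to the shared constant K) at floor positions
    (x, y, 0) for one fixture-sensor pair; fix and sen are 3D ceiling
    coordinates.  q(theta) is a placeholder intensity profile."""
    R = np.zeros((len(ys), len(xs)))
    for r, y in enumerate(ys):
        for c, x in enumerate(xs):
            p = np.array([x, y, 0.0])
            v1, v2 = p - fix, sen - p
            D1, D2 = np.linalg.norm(v1), np.linalg.norm(v2)
            th1 = np.arccos(-v1[2] / D1)  # angle to the fixture's downward normal
            th2 = np.arccos(v2[2] / D2)   # angle to the floor's upward normal
            R[r, c] = q(th1) * np.cos(th1) * np.cos(th2) ** 2 / (D1 ** 2 * D2 ** 2)
    return R
```

Running this over a grid of floor positions produces the reflection kernel of the pair, as displayed in Figure 13.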
3.5.3 2D confidence map
Intuitively, if there is a large entry in matrix $\mathcal{E}$, then we can find the corresponding fixture-sensor pair and compute Φ_{2} at all positions on the floor using Eq. (14). Larger Φ_{2} values indicate regions that are more likely to be occupied.
Based on this intuition, we can precompute Φ_{2} at all positions for all fixture-sensor pairs offline. We call the precomputed Φ_{2} over all positions the reflection kernel of the corresponding fixture-sensor pair. Two reflection kernels are displayed in Figure 13 as examples.
Let the reflection kernel for fixture j and sensor i be R_{i,j}. Then a 2D confidence map can be computed simply as a weighted sum of all the reflection kernels:

$$C(x,y)=\sum_{i=1}^{N_{S}}\sum_{j=1}^{N_{L}}\mathcal{E}_{i,j}\,R_{i,j}(x,y)\qquad(15)$$
Unlike the light blockage model in Section “3D scene reconstruction with light blockage model”, where a 3D volume is reconstructed, here we can only estimate a 2D confidence map. In this map, each pixel represents the confidence that the corresponding point on the floor plane is affected by occupants, either by a person standing at that point or by a person’s shadow.
We can also modify Eq. (15) to:

$$C(x,y)=\frac{\sum_{i=1}^{N_{S}}\sum_{j=1}^{N_{L}}\mathcal{E}_{i,j}^{\,\lambda_{1}}\,R_{i,j}(x,y)}{\left(\sum_{i=1}^{N_{S}}\sum_{j=1}^{N_{L}}R_{i,j}(x,y)\right)^{\lambda_{2}}}\qquad(16)$$

such that the parameter λ_{1}≥1 emphasizes large entries of $\mathcal{E}$ to sharpen the resulting confidence map, and the normalization parameter λ_{2}≥0 ameliorates distortions in the confidence map caused by the nonuniform spatial distribution of fixtures and sensors. Eq. (15) is the special case of Eq. (16) with λ_{1}=1 and λ_{2}=0.
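Given precomputed reflection kernels, the weighted-sum map reduces to a few array operations. The placement of λ_1 and λ_2 below reflects our reading of the text (λ_1 as an exponent on the entries of the aggregated matrix, λ_2 as an exponent on a kernel-sum normalizer):

```python
import numpy as np

def confidence_map(Escript, kernels, lam1=1.0, lam2=0.0):
    """2D confidence map as a weighted sum of reflection kernels
    (Eq. (15)); lam1 sharpens large entries of Escript and lam2
    normalizes for the fixture/sensor layout (our reading of Eq. (16)).

    Escript: (N_S, N_L) aggregated differences (assumed non-negative).
    kernels: (N_S, N_L, H, W) array of precomputed kernels R_ij."""
    num = np.tensordot(Escript ** lam1, kernels, axes=([0, 1], [0, 1]))
    den = kernels.sum(axis=(0, 1)) ** lam2
    return num / den
```

With `lam1=1` and `lam2=0` the denominator is identically one and the function reproduces Eq. (15).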
4 Results
4.1 3D reconstruction results with light blockage model
To validate the first approach introduced in Section “3D scene reconstruction with light blockage model”, we divide the smart room into six regions and create nine occupancy scenarios by occupying one or two regions with people and furniture. We discretize the 3D space into voxels of size 1×1×1 inch^{3} and render 3D volumes of size 87×136×88. For the Gaussian kernel, we set σ=20.0 inches. In the sensing stage, n=40 perturbation patterns are used. The spatial coordinates of the twelve LED fixtures and the twelve color sensors are listed in Table 1 and visualized in Figure 10.
4.1.1 Reconstructed volumes
In Figure 14 we show the results for scenarios where only one region is occupied, and in Figure 15 the results where two regions are occupied. It is interesting to see that although the precision of the reconstructed volume is very low, the reconstruction quality is good enough for the lighting control module to determine which part of the room is occupied and what kind of light should be delivered. If better reconstruction quality is required, one simple solution is to increase the number of color sensors; this becomes part of the system design process for an operational smart space. However, since our goal is only to roughly estimate the occupancy distribution so that we can decide what lighting condition to produce, we do not need high-resolution, high-quality 3D volumes (further discussed in Section “The quality of the estimation”).
4.1.2 Complexity analysis and accelerations
Assume the number of voxels in one volume is N_{P}. The number of direct light paths is N_{L}·N_{S}. To render one volume, we have to evaluate Eq. (9) at N_{P} voxels, so the total number of operations is N_{P}·N_{L}·N_{S}. Each operation computes one point-to-line distance and one Gaussian kernel. In our experiments, N_{P}=87×136×88, N_{S}=12, and N_{L}=12, so the number of operations is about 150 million. Our rendering algorithm is implemented in C++. On a Macintosh with a 2.5 GHz Intel Core i5 CPU and 8 GB memory, the direct algorithm takes about 18 seconds to render one volume.
One way to accelerate the rendering is to precompute the point-to-line distances and the Gaussian kernels and keep them in memory. When rendering a new volume, we still perform N_{P}·N_{L}·N_{S} operations, but each operation is simply one multiplication and one addition. In this way, on the same machine, precomputation takes about 18 seconds, but rendering each volume takes only 2 seconds. The trade-off is that such a caching optimization uses much more memory: if each Gaussian kernel is stored as a 64-bit double-precision floating point number, it requires about 1 GB of memory to keep 150 million kernels. To further accelerate the rendering to achieve real-time performance, either parallel computing on a GPU could be used, or the number of voxels could be reduced by downsampling.
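The precompute-then-render split can be sketched as follows (a vectorized sketch, not the authors’ C++ implementation; fixtures and sensors are given as 3D coordinates, and the per-voxel normalization follows Eq. (9)):

```python
import numpy as np

def precompute(points, fixtures, sensors, sigma=20.0):
    """One-time pass: Gaussian kernel of the point-to-path distance for
    every (voxel, sensor, fixture) triple.  points: (N_P, 3) array.
    Memory: N_P * N_S * N_L float64 values."""
    G = np.empty((len(points), len(sensors), len(fixtures)))
    for i, s in enumerate(sensors):
        for j, f in enumerate(fixtures):
            ab = s - f
            t = np.clip((points - f) @ ab / (ab @ ab), 0.0, 1.0)
            d = np.linalg.norm(points - (f + t[:, None] * ab), axis=1)
            G[:, i, j] = np.exp(-d ** 2 / (2 * sigma ** 2))
    return G

def render(G, Escript):
    """Per new measurement: one multiply-add pass over the cached kernels."""
    return np.tensordot(G, Escript, axes=([1, 2], [0, 1])) / G.sum(axis=(1, 2))
```

Only `render` runs per measurement; its cost is exactly the one-multiplication-one-addition pass described above.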
4.2 Floor-plane confidence maps with light reflection model
For the second approach, introduced in Section “Floor-plane occupancy mapping with light reflection model”, we place all color sensors on the ceiling, each installed close to one LED fixture. The spatial coordinates of the fixtures and the sensors can be found in Table 1. Again, we create nine occupancy scenarios by occupying one or two regions with people and furniture, and discretize the 2D floor plane into pixels of size 1×1 inch^{2}. The confidence maps computed with Eq. (15) for the nine occupancy scenarios are shown in Figures 16 and 17. The resulting 2D confidence maps are basically correct when compared to the ground truth: we can see which regions of the room are occupied. Compared with the results in Figures 14 and 15, however, the 3D scene reconstruction approach of Section “3D scene reconstruction with light blockage model” produces better estimates than the light reflection model. This is because in the light-blockage-based approach the color sensors are installed on the walls, so the z-coordinate information is well captured; when the sensors are installed on the ceiling at the same height as the fixtures, the z-coordinate information is completely lost, and without this important information the quality of the estimates is expected to drop.
Since Eq. (15) is only a weighted summation of precomputed reflection kernels, and both N _{ L } and N _{ S } are small, generating a 2D confidence map is very fast.
4.3 Quantitative evaluation
Due to the complexity of a real 3D scene, it is difficult to quantitatively assess the reconstructed 3D volume or the estimated floor-plane 2D confidence map, and the ground truth is also difficult to represent accurately. To roughly compare the two approaches, we generate the floor-plane ground truth of the occupancy distribution for the nine scenarios by modeling each person or chair as a disk of radius 10 inches on this plane, as shown in Figure 18.
Once we have a 2D ground truth map, we can flatten it into a vector and compute the correlation coefficient between the ground truth and an estimated 2D map. For the light reflection model, we simply use the floor-plane 2D confidence map estimated using Eq. (15) or Eq. (16). For the light blockage model, we use the z-axis integral of the reconstructed 3D volume as the floor-plane confidence map. The correlation coefficient lies in the range [−1,1]; the larger it is, the better the estimated occupancy map. For each of the nine scenarios and each of the two approaches, we create multiple instances with different parameters and compute the average correlation coefficient over all instances. The mean of the average correlation coefficients over all nine scenarios is used as a final score, which we call the mACC (mean average correlation coefficient). Results are reported in Table 2. From this table we observe that the light blockage model performs much better than the light reflection model. This is expected, because we lose all z-coordinate information when we mount the sensors on the ceiling. Even for the light blockage model, the correlation coefficients are still mostly smaller than 0.5. This, too, is expected, partially due to the difficulty of accurately representing the ground truth and partially due to the difficulty of the problem itself.
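The scoring procedure can be sketched directly; the maps passed in are hypothetical arrays, flattened before computing the Pearson correlation:

```python
import numpy as np

def corr_coef(ground_truth, estimate):
    """Pearson correlation between two 2D maps, flattened to vectors."""
    return np.corrcoef(ground_truth.ravel(), estimate.ravel())[0, 1]

def mACC(scores_per_scenario):
    """Mean over scenarios of the average correlation over that
    scenario's instances (the final score described in the text)."""
    return np.mean([np.mean(s) for s in scores_per_scenario])
```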
5 Discussion
5.1 What is being sensed?
Regarding the novel color-sensor-based occupancy sensing technique introduced in this paper, the most significant question is: what is actually being sensed, compared to other techniques such as PIR or ultrasonic sensors? PIR sensors sense infrared radiation, and ultrasonic sensors measure distances. In our technique, occupancy estimation is based on the difference matrix between the light transport of an empty room and that of the current room, as described in Section “3D scene reconstruction with light blockage model”. This difference can be caused by either people or furniture. The “empty room” need not be literally empty: it is the room condition when the matrix A_{0} is acquired, so more precisely it is a “reference room”. If the reference room is already occupied, then either removing an occupant or adding a new one will produce a difference between the reference transport matrix A_{0} and the current transport matrix A, and thus will be sensed. In a real application of this technique, there should be a calibration button that lets the user set the present room condition as the reference room.
5.2 Aggregation of the difference matrix
In the two approaches introduced in Section “3D scene reconstruction with light blockage model” and Section “Floor-plane occupancy mapping with light reflection model”, respectively, we aggregate the difference matrix E=A_{0}−A to a smaller matrix $\mathcal{E}$, as discussed in Section “Aggregation of E”. When sensing occupancy, we are only interested in where the occupant is, not in which color channel the occupant affects more. However, this does not mean that the color information measured by the color sensors is not useful: the summation over the three color channels mitigates errors or noise in any single channel. A system with only one tunable channel, e.g., a brightness-tunable white lighting system, would be much more vulnerable to inaccurate measurements.
5.3 Assumptions in the models
The light blockage model introduced in Section “3D scene reconstruction with light blockage model” assumes that a direct light path exists for every fixture-sensor pair; we install all sensors on the walls to make this assumption hold. We also assume that the direct light path is the dominant path, so that changes in the diffuse reflection paths can be ignored relative to changes in the direct light path. This assumption is mostly true, but may fail in some special cases. For example, when there is a large mirror in the room, there will be specular reflection paths; these can be as significant as direct light paths and cannot be ignored.
The light reflection model also makes several assumptions beyond mounting all sensors on the ceiling. First, since we do not consider any reflection by the walls, we assume that wall surface reflection can be ignored. Although the total surface area of the walls will almost always be larger than that of the floor, this assumption is acceptable because all fixtures and sensors are “looking down”. The second assumption is that the floor surface is Lambertian. This assumption need not hold: if we know the surface is non-Lambertian, we simply modify Eq. (13) according to the surface property. Finally, we assume the floor plane has uniform albedo. If the floor comprises roughly the same material, the albedo should be similar; however, if half of the floor is wool carpet and half is marble tile, this assumption does not hold.
5.4 The quality of the estimation
Are the occupancy distribution estimation results shown in Section “Results” good enough? The answer depends on the problem being solved. A researcher from the computer graphics or tomography community may find the results in Figures 14, 15, 16 and 17 unimpressive. However, we are working in a very different regime, under very challenging constraints. Our goal is not to reconstruct a highly accurate, high-resolution volume reproducing every wrinkle on an occupant’s T-shirt. We are controlling the LED fixtures of a smart lighting system, for the purposes of energy efficiency, productivity, and human comfort. We are not producing 3D animations, we are not identifying who is in the room, and we are not using a stage light to follow a dancer precisely. We are simply controlling luminaires that people use every day, such as in an office, a conference room, or a living room. Thus what we need to know is this: which areas of the room are occupied? We do not need, and for legal reasons should take care not to obtain or use, any information beyond that; knowing more than needed would raise privacy concerns. For our smart lighting problem, then, the occupancy distribution estimation results shown in Section “Results” suffice for the task. If the reconstruction is too rough, we can improve the precision by introducing more sensors.
5.5 Better hardware
In Section “Limitations of current testbed”, we explained that the limited performance of our current fixtures and sensors prevents us from implementing a real-time smart lighting system with the testbed, although they suffice for the validation experiments in Section “Results”. The current fixture has a delay between the input signals being specified and the desired lighting condition being produced. The current SeaChanger Colorbug sensors have an integration time during which color is measured, and a communication time for WiFi handshaking and data transmission. In the future, faster LED fixtures will replace our current ones, and more customizable color sensors can be built from low-cost, commercially available components. Directly wiring the sensors to the system instead of using WiFi should significantly reduce the communication delay. We also expect that in the future the color sensor will often be built into the LED fixture circuit as a combined product, which will be easier to install, more aesthetically pleasing, and part of a more affordable complete lighting solution.
5.6 Broader applications
In this paper we have discussed controlling the lighting condition in a space such as an office or a living room. In the future, this technique may apply to the lighting control of any indoor space. For example, controlling the lighting in a barn could improve agricultural productivity, and controlling the lighting in a sickroom could accelerate healing. The technique could also control the lighting in a hallway, a warehouse, or a large vehicle.
6 Conclusion
We have presented a novel technique to estimate the occupancy distribution in an indoor space using color-controllable LED fixtures and sparsely distributed color sensors. This technique can be used to implement occupancy-sensitive, privacy-preserving smart lighting systems. The key idea is to modulate imperceptible perturbations onto the light and measure the changes in the sensor outputs to recover a light transport matrix. Two approaches, based on a light blockage model and a light reflection model, respectively, are proposed to estimate the occupancy distribution from the light transport matrix. Due to the small number of fixtures and sensors, and the largely overlapping light fields of different fixture-sensor pairs, the occupancy distribution estimation problem is ill-posed and extremely challenging. Both approaches produce results that suffice to infer the occupancy scenario in the space, yet are coarse enough to protect the privacy of human occupants.
7 Endnotes
^{a} Figure 6 needs to be viewed in color to be fully appreciated.
^{b} The room may include furnishings. By “empty” we mean no occupants (humans, animals).
^{c} The unit here is fraction of the sphere, not steradian.
References
 1.
Elgala H, Mesleh R, Haas H: Indoor optical wireless communication: potential and state-of-the-art. Commun Mag IEEE 2011, 49(9):56–62. 10.1109/MCOM.2011.6011734
 2.
Afgani MZ, Haas H, Elgala H, Knipp D: Visible light communication using OFDM. In 2nd International Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (TRIDENTCOM 2006), Barcelona, Spain: IEEE; 2006:6.
 3.
Komine T, Nakagawa M: Fundamental analysis for visible-light communication system using LED lights. IEEE Trans Consum Electron 2004, 50(1):100–107. 10.1109/TCE.2004.1277847
 4.
Komine T, Nakagawa M: Integrated system of white LED visible-light communication and power-line communication. IEEE Trans Consum Electron 2003, 49(1):71–79. 10.1109/TCE.2003.1205458
 5.
Tanaka Y, Komine T, Haruyama S, Nakagawa M: Indoor visible light data transmission system utilizing white LED lights. IEICE Trans Commun 2003, 86(8):2440–2454.
 6.
Little TD, Dib P, Shah K, Barraford N, Gallagher B: Using LED lighting for ubiquitous indoor wireless networking. In Networking and Communications, 2008. WIMOB’08. IEEE International Conference on Wireless and Mobile Computing. IEEE, Avignon, France; 2008:373–378.
 7.
Rea M, Jaekel R: Monitoring occupancy and light operation. Lighting Res Technol 1987, 19(2):45–49. 10.1177/096032718701900203
 8.
Glennie W, Thukral I, Rea M: Lighting control: feasibility demonstration of a new type of system. Lighting Res Technol 1992, 24(4):235–242. 10.1177/096032719202400407
 9.
Delaney DT, O’Hare GM, Ruzzelli AG: Evaluation of energy-efficiency in lighting systems using sensor networks. In Proceedings of the First ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings. ACM, Berkeley; 2009:61–66.
 10.
Agarwal Y, Balaji B, Gupta R, Lyles J, Wei M, Weng T: Occupancy-driven energy management for smart building automation. In Proceedings of the 2nd ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Building. ACM, Zurich; 2010:1–6.
 11.
Aldrich M, Badshah A, Mayton B, Zhao N, Paradiso JA: Random walk and lighting control. In Sensors, 2013 IEEE, Baltimore, MD, USA: IEEE; 2013:1–4.
 12.
Caicedo D, Pandharipande A, Leus G: Occupancy-based illumination control of LED lighting systems. Lighting Res Technol 2011, 43(2):217–234. 10.1177/1477153510374703
 13.
Guo X, Tiller D, Henze G, Waters C: The performance of occupancy-based lighting control systems: a review. Lighting Res Technol 2010, 42(4):415–431. 10.1177/1477153510376225
 14.
ul Haq MA, Hassan MY, Abdullah H, Rahman HA, Abdullah MP, Hussin F, Said DM: A review on lighting control technologies in commercial buildings, their performance and affecting factors. Renewable Sustainable Energy Rev 2014, 33: 268–279. 10.1016/j.rser.2014.01.090
 15.
Debevec P, Hawkins T, Tchou C, Duiker HP, Sarokin W, Sagar M: Acquiring the reflectance field of a human face. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/AddisonWesley Publishing Co., New Orleans; 2000:145–156.
 16.
Masselus V, Peers P, Willems YD: Relighting with 4d incident light fields. ACM Trans Graph (TOG) 2003, 22(3):613–620. New York: ACM New York: ACM 10.1145/882262.882315
 17.
Wang J, Dong Y, Tong X, Lin Z, Guo B: Kernel Nyström method for light transport. ACM Trans Graph (TOG) 2009, 28(3):29. New York: ACM New York: ACM
 18.
Peers P, Mahajan DK, Lamond B, Ghosh A, Matusik W, Ramamoorthi R, Debevec P: Compressive light transport sensing. ACM Trans Graph (TOG) 2009, 28(1):3. 10.1145/1477926.1477929
 19.
O’Toole M, Kutulakos KN: Optical computing for fast light transport analysis. ACM Trans Graph (TOG) 2010, 29(6):164.
 20.
Sen P, Chen B, Garg G, Marschner SR, Horowitz M, Levoy M, Lensch H: Dual photography. ACM Trans Graph (TOG) 2005, 24(3):745–755. 10.1145/1073204.1073257
 21.
Sen P, Darabi S: Compressive dual photography. Comput Graph Forum 2009, 28(2):609–618. 10.1111/j.14678659.2009.01401.x
 22.
Wetzstein G, Bimber O: Radiometric compensation through inverse light transport. In Pacific Conference on Computer Graphics and Applications, Maui, HI, USA; 2007:391–399.
 23.
Bracewell RN: Strip integration in radio astronomy. Aust J Phys 1956, 9(2):198–217. 10.1071/PH560198
 24.
Kak AC, Slaney M: Principles of Computerized Tomographic Imaging. Society for Industrial and Applied Mathematics, Philadelphia; 2001.
 25.
Gordon R, Bender R, Herman GT: Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and x-ray photography. J Theor Biol 1970, 29(3):471–481. 10.1016/00225193(70)901098
 26.
Kole J: Statistical image reconstruction for transmission tomography using relaxed ordered subset algorithms. Phys Med Biol 2005, 50(7):1533. 10.1088/00319155/50/7/015
 27.
Rudin LI, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Phys D: Nonlinear Phenomena 1992, 60(1):259–268. 10.1016/01672789(92)90242F
 28.
Laurentini A: The visual hull concept for silhouette-based image understanding. IEEE Trans Pattern Anal Mach Intell 1994, 16(2):150–162. 10.1109/34.273735
 29.
Matusik W, Buehler C, Raskar R, Gortler SJ, McMillan L: Imagebased visual hulls. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/AddisonWesley Publishing Co., New Orleans; 2000:369–374.
 30.
Mostofi Y: Cooperative wireless-based obstacle/object mapping and see-through capabilities in robotic networks. IEEE Trans Mobile Comput 2013, 12(5):817–829. 10.1109/TMC.2012.32
 31.
Wilson J, Patwari N: Radio tomographic imaging with wireless networks. IEEE Trans Mobile Comput 2010, 9(5):621–632. 10.1109/TMC.2009.174
 32.
Wason JD, Wen JT: Robot raconteur: a communication architecture and library for robotic and automation systems. In IEEE Conference on Automation Science and Engineering (CASE), Trieste, Italy: IEEE; 2011:761–766.
 33.
Afshari S, Mishra S, Wen J, Karlicek R: An adaptive smart lighting system. In Proceedings of the Fourth ACM Workshop on Embedded Sensing Systems for EnergyEfficiency in Buildings. ACM, Toronto; 2012:201–202.
 34.
Afshari S, Mishra S, Julius A, Lizarralde F, Wen JT: Modeling and feedback control of color-tunable LED lighting systems. In American Control Conference (ACC), Montreal, Canada: IEEE; 2012:3663–3668.
 35.
Afshari S, Mishra S, Julius A, Lizarralde F, Wason JD, Wen JT: Modeling and control of color tunable lighting systems. Energ Build 2014, 68 Part A(0):242–253. 10.1016/j.enbuild.2013.08.036
 36.
Jia L, Afshari S, Mishra S, Radke RJ: Simulation for previsualizing and tuning lighting controller behavior. Energ Build 2014, 70(0):287–302. 10.1016/j.enbuild.2013.11.063
 37.
Jia L, Radke RJ: Using time-of-flight measurements for privacy-preserving tracking in a smart room. IEEE Trans Ind Inform 2014, 10(1):689–696. 10.1109/TII.2013.2251892
 38.
Li H, Chen X, Huang B, Tang D, Chen H: High bandwidth visible light communications based on a post-equalization circuit. Photonics Technol Lett IEEE 2014, 26(2):119–122. 10.1109/LPT.2013.2290026
 39.
Khalid A, Cossu G, Corsini R, Choudhury P, Ciaramella E: 1 Gb/s transmission over a phosphorescent white LED by using rate-adaptive discrete multitone modulation. Photonics J IEEE 2012, 4(5):1465–1473. 10.1109/JPHOT.2012.2210397
 40.
Tsonev D, Chun H, Rajbhandari S, McKendry JJD, Videv S, Gu E, Haji M, Watson S, Kelly AE, Faulkner G, Dawson MD, Haas H, O’Brien D: A 3 Gb/s single-LED OFDM-based wireless VLC link using a gallium nitride μLED. Photonics Technol Lett IEEE 2014, 26(7):637–640. 10.1109/LPT.2013.2297621
 41.
Cossu G, Khalid A, Choudhury P, Corsini R, Ciaramella E: 3.4 gbit/s visible optical wireless transmission based on RGB LED. Opt Express 2012, 20(26):501–506. 10.1364/OE.20.00B501
 42.
Plackett RL: Some theorems in least squares. Biometrika 1950, 37(1/2):149–157. 10.2307/2332158
 43.
Wang Q, Zhang X, Wang M, Boyer KL: Learning room occupancy patterns from sparsely recovered light transport models. In 22nd International Conference on Pattern Recognition, Stockholm, Sweden; 2014.
 44.
Feng X, Zhang Z: The rank of a random matrix. Appl Math Comput 2007, 185(1):689–694. 10.1016/j.amc.2006.07.076
 45.
Keesey UT: Flicker and pattern detection: a comparison of thresholds. J Opt Soc Am 1972, 62(3):446–448. 10.1364/JOSA.62.000446
 46.
Roufs J: Dynamic properties of vision – I. Experimental relationships between flicker and flash thresholds. Vis Res 1972, 12(2):261–278. 10.1016/00426989(72)901174
 47.
Lawler EL, Lenstra JK, Kan AR, Shmoys DB: The Traveling Salesman Problem: a Guided Tour of Combinatorial Optimization, vol. 3. Wiley, New York; 1985.
 48.
Crowder H, Padberg MW: Solving large-scale symmetric travelling salesman problems to optimality. Manage Sci 1980, 26(5):495–509. 10.1287/mnsc.26.5.495
 49.
Mühlenbein H, GorgesSchleuter M, Krämer O: Evolution algorithms in combinatorial optimization. Parallel Comput 1988, 7(1):65–85. 10.1016/01678191(88)900981
 50.
Grötschel M, Holland O: Solution of large-scale symmetric travelling salesman problems. Math Program 1991, 51(1–3):141–202. 10.1007/BF01586932
 51.
Braun H: On solving travelling salesman problems by genetic algorithms. In Parallel Problem Solving from Nature. Lecture Notes in Computer Science, vol. 496. Edited by: Schwefel HP, Männer R. Springer, Berlin Heidelberg; 1991:129–133.
 52.
Mitchell M: An Introduction to Genetic Algorithms. MIT press, Cambridge; 1998.
 53.
Radon J: On determination of functions by their integral values along certain multiplicities. Ber der Sachische Akademie der Wissenschaften Leipzig, (Germany) 1917, 69: 262–277.
 54.
Deans SR: The Radon Transform and Some of Its Applications. Courier Dover Publications, Mineola; 2007.
 55.
Shepp LA, Logan BF: The Fourier reconstruction of a head section. IEEE Trans Nuclear Sci 1974, 21(3):21–43. 10.1109/TNS.1974.6499235
Acknowledgments
This work was supported primarily by the Engineering Research Centers Program (ERC) of the National Science Foundation under NSF Cooperative Agreement No. EEC-0812056 and in part by New York State under NYSTAR contract C090145.
The authors would like to thank Prof. Robert Karlicek for his constructive suggestions. The authors would also like to thank Dr. Zhenhua Huang, Mr. Lawrence Fan, Mr. Sina Afshari, Dr. Li Jia, Mr. Cyril Acholo, Mr. Anqing Liu, Prof. Richard J. Radke, Prof. Sandipan Mishra and Mr. Charles Goodwin for their helpful discussions.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
QW conceived the basic idea of this work, implemented the methods, carried out the experiments, and wrote the manuscript. XZ helped with the data collection, and proposed the genetic algorithm for perturbation ordering. KB supervised the entire project. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wang, Q., Zhang, X. & Boyer, K.L. Occupancy distribution estimation for smart light delivery with perturbation-modulated light sensing. J Sol State Light 1, 17 (2014). https://doi.org/10.1186/s40539-014-0017-2
Keywords
 Non-imaging sensors
 Perturbation modulation
 Occupancy scenario
 3D reconstruction
 Photometry