The Active Light Cloud was conceived as an innovative approach to how we light the world of tomorrow. Using computer vision that tracks human movement against the static interior space and extrapolates each gesture into an expanded field of pixel vectors, the system predicts the user's particular lighting needs. Users can throw light down a dark hallway or summon a cluster of task lighting with a single gesture. Most importantly, the system turns off the lights once the room is empty, dramatically reducing energy consumption.
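The original description does not specify an implementation, but as a minimal sketch the pipeline above might look like the following in Python with OpenCV: background subtraction detects movement against the static interior, dense optical flow supplies the field of pixel motion vectors, and the dominant gesture direction selects a fixture to illuminate. The fixture names, thresholds, and the set_fixture() actuation stub are illustrative assumptions, not details of the actual system.

```python
import time
import cv2
import numpy as np

# Hypothetical fixture layout: unit direction vectors (in image coordinates,
# y pointing down) from the camera's view toward each light. These names and
# positions are assumptions for illustration only.
FIXTURES = {
    "hallway": np.array([1.0, 0.0]),   # gesture toward the right
    "desk":    np.array([-1.0, 0.0]),  # gesture toward the left
    "ceiling": np.array([0.0, -1.0]),  # gesture upward
}

EMPTY_TIMEOUT_S = 30.0   # assumed delay before an empty room goes dark
MOTION_THRESHOLD = 0.02  # assumed fraction of moving pixels meaning "occupied"

def set_fixture(name: str, on: bool) -> None:
    """Stub for whatever actuation layer drives the real lights."""
    print(f"{name}: {'ON' if on else 'OFF'}")

def main() -> None:
    cap = cv2.VideoCapture(0)
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_gray = None
    last_motion = time.time()
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # Foreground mask: movement against the static interior.
            mask = bg.apply(frame)
            occupied = (mask > 0).mean() > MOTION_THRESHOLD

            if occupied and prev_gray is not None:
                last_motion = time.time()
                # Dense optical flow yields a field of per-pixel motion vectors.
                flow = cv2.calcOpticalFlowFarneback(
                    prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
                fg = mask > 0
                if fg.any():
                    # Average the flow over the moving region to get the gesture.
                    mean_vec = flow[fg].mean(axis=0)
                    speed = np.linalg.norm(mean_vec)
                    if speed > 1.0:  # ignore small jitter
                        direction = mean_vec / speed
                        # "Throw" light toward the fixture best aligned
                        # with the gesture direction.
                        best = max(FIXTURES, key=lambda k: FIXTURES[k] @ direction)
                        set_fixture(best, True)

            # Room has been empty long enough: shut everything down.
            if time.time() - last_motion > EMPTY_TIMEOUT_S:
                for name in FIXTURES:
                    set_fixture(name, False)

            prev_gray = gray
    except KeyboardInterrupt:
        pass
    finally:
        cap.release()

if __name__ == "__main__":
    main()
```

In this sketch the "field of pixel vectors" is simply the Farneback flow field averaged over the foreground mask; a production system would presumably use richer gesture classification and a real lighting-control bus in place of the print stub.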
Beyond its initial aim, the project has proven relevant in a number of areas outside lighting. What is innovative in this system is the use of computer-processed camera imagery to automate, indeed activate, a designed environment that is responsive to the user. This matters because it brings us closer to thinking of the built environment as an intelligent, adaptive, and responsive platform for engagement. The work has spun off several new projects that I am now pursuing, each furthering the notion of responsive, activated architecture.
The functional prototype of the Active Light Cloud was developed in collaboration with SAIC's world-renowned art and technology faculty, most notably Matt Nelson, Systems Programmer; Ed Bennet, Electronic Systems Designer; Anna Yu, Systems Production Supervisor; and John Manning, System Executive Producer. The system's programmed intelligence can be extended into a wide range of commercial and residential settings. Light switches might soon be replaced by a wide variety of gestures, controlling the affect and impact of our interior lighting in a more intimate, fluid, intelligent, and environmentally conscious manner.