In collaboration with Jeremy Sherman. We set out to create a large-scale projection of faces in the urban landscape, whose facial expressions would react to the city's sounds: low, mid, and high frequencies. In our first experiment, we controlled the image's saturation with the sound level, using the VJ software Modul8: the louder the input, the more saturated the image became. The interaction took place between a person speaking into an external computer microphone and the video.
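We built this interaction inside Modul8 rather than in code, but the mapping itself is easy to sketch. Below is a minimal Processing illustration of the same idea, not our Modul8 setup: the image file name and the gain factor are placeholders, and the microphone is read with the Processing Sound library.

```java
import processing.sound.*;

PImage face;
AudioIn mic;
Amplitude amp;

void setup() {
  size(640, 480);
  colorMode(HSB, 255);
  face = loadImage("face.jpg");  // placeholder image name
  mic = new AudioIn(this, 0);    // default microphone input
  mic.start();
  amp = new Amplitude(this);
  amp.input(mic);
}

void draw() {
  // amp.analyze() returns the current level in 0..1;
  // the gain of 4 is an assumed value, tuned so a quiet room still registers.
  float level = constrain(amp.analyze() * 4, 0, 1);

  image(face, 0, 0, width, height);
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color c = pixels[i];
    // Louder input -> more saturated image; silence -> grayscale.
    pixels[i] = color(hue(c), saturation(c) * level, brightness(c));
  }
  updatePixels();
}
```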
In our second experiment, we controlled the image with motion detection. We programmed a Processing sketch that detects motion through a webcam, and divided the video input into five areas so we could tell which areas were in motion. The image changed according to the motion in each area, following a movement passing in front of the camera. The person's facial expression changed with the motion in the different areas: when there was no motion he seemed bored, and when there was a lot of motion he seemed overwhelmed.
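A simplified reconstruction of that sketch is below (the five areas match what we used; the threshold and the red-bar visualization are illustrative, not our exact code). It frame-differences the webcam feed and measures how much motion falls into each vertical fifth of the image:

```java
import processing.video.*;

Capture cam;
PImage prevFrame;
final int AREAS = 5;          // the five vertical areas we tracked
final float THRESHOLD = 40;   // per-pixel difference threshold (assumed value)

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  prevFrame = createImage(width, height, RGB);
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  cam.loadPixels();
  prevFrame.loadPixels();

  // Count pixels that changed since the last frame, per area.
  int[] changed = new int[AREAS];
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int i = y * width + x;
      float diff = abs(brightness(cam.pixels[i]) - brightness(prevFrame.pixels[i]));
      if (diff > THRESHOLD) {
        changed[x * AREAS / width]++;  // which vertical strip x falls in
      }
    }
  }
  prevFrame.copy(cam, 0, 0, width, height, 0, 0, width, height);

  image(cam, 0, 0);
  int areaPixels = (width / AREAS) * height;
  for (int a = 0; a < AREAS; a++) {
    float motion = changed[a] / (float) areaPixels;  // 0..1 motion amount
    // Draw a bar over each area whose height shows its motion level.
    fill(255, 0, 0, 128);
    noStroke();
    rect(a * width / AREAS, height - motion * height, width / AREAS, motion * height);
  }
}
```

In the installation, the per-area motion values drove the face's expression instead of the bars drawn here.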
In our third experiment, we projected the image at a large scale. This allowed us to explore the interaction and the proportions of a person standing in front of the projection while the image “followed” him with its eyes.