All I Have Seen

In order to study the properties of total visual input in humans, Nebojsa wore a camera for around two weeks, capturing, on average, an image every 20 seconds of his waking hours.
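A rough back-of-envelope estimate of the resulting dataset size, under assumptions that are ours rather than the project's (about 16 waking hours per day, exactly 14 days):

```python
# Rough estimate of the number of captured frames.
# Assumptions (not stated in the project description): ~16 waking hours/day, 14 days.
waking_hours_per_day = 16        # assumed
seconds_between_images = 20      # capture rate quoted above
days = 14                        # "around two weeks"

images_per_day = waking_hours_per_day * 3600 // seconds_between_images
total_images = images_per_day * days
print(images_per_day, total_images)  # ~2,880 frames/day, ~40,000 frames in total
```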

The resulting dataset contains a mix of indoor and outdoor scenes as well as numerous foreground objects.

Our first analysis goal is to create a visual summary of the subject's two weeks of life using unsupervised algorithms that would automatically discover recurrent scenes, familiar faces, or common actions. Direct application of existing algorithms, such as panoramic stitching (e.g. Photosynth) or appearance-based clustering models (e.g. the epitome), is impractical due to either the large dataset size or the dramatic variation in lighting conditions.
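To illustrate the kind of appearance-based grouping involved (this is not the project's actual algorithm; the file pattern, thumbnail size, and cluster count below are assumptions), here is a minimal sketch that runs mini-batch k-means over downsampled RGB thumbnails. Mini-batch clustering scales to tens of thousands of frames, though a raw-pixel representation like this remains sensitive to exactly the lighting variation noted above.

```python
# Minimal sketch: group wearable-camera frames by coarse appearance.
# Assumptions (not from the project): JPEG frames in ./aihs_frames/,
# 32x32 RGB thumbnails as features, 50 clusters.
import glob
import numpy as np
from PIL import Image
from sklearn.cluster import MiniBatchKMeans

paths = sorted(glob.glob("aihs_frames/*.jpg"))

def thumbnail_features(path, size=(32, 32)):
    """Downsample a frame to a small RGB thumbnail and flatten it."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

X = np.stack([thumbnail_features(p) for p in paths])

# Mini-batch k-means handles large frame collections in small chunks.
km = MiniBatchKMeans(n_clusters=50, batch_size=1024, random_state=0).fit(X)

# One representative frame per cluster: the member closest to the centroid.
for k in range(km.n_clusters):
    members = np.flatnonzero(km.labels_ == k)
    if members.size == 0:
        continue
    dists = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
    best = paths[members[dists.argmin()]]
    print(f"cluster {k:2d}: {members.size:5d} frames, e.g. {best}")
```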

We dubbed this type of data "All I Have Seen" (AIHS, meant to be pronounced like "eyes"). While datasets of this kind have been assembled before, we believe that, with the proliferation of mobile devices and the availability of cloud computing, the time is riper than ever for research into this type of data acquisition, unsupervised techniques for its analysis, and applications built on top of them.
