The following demo demonstrates the human ability to determine depth ordering from only two frames, even when there is no visible edge between the layers.
The demo displays two-frame sequences of random dots containing two moving layers. When there is a density difference between the layers, even a small one, observers can determine which layer is in front with a high success rate; when there is no difference, performance is no better than chance. This is in contrast to three-frame sequences (also in the demo), where depth order can be determined even without a density difference.
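A minimal sketch of how such a two-layer stimulus could be generated. The dot counts, shifts, and layer velocities below are illustrative assumptions, not the parameters used in the actual demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_layer_stimulus(n_front=300, n_back=150, shift=0.02):
    """Generate a two-frame random-dot stimulus with two moving layers.

    The front layer is denser than the back layer (n_front > n_back);
    this density difference is the cue that lets observers judge depth
    order from just two frames. Dot positions lie in the unit square;
    each layer translates by a different horizontal shift between
    frames (illustrative parameters, not the demo's).
    """
    front = rng.random((n_front, 2))
    back = rng.random((n_back, 2))
    frame1 = np.vstack([front, back])
    # The two layers move at different speeds (motion parallax).
    frame2 = np.vstack([front + [shift, 0.0], back + [2 * shift, 0.0]])
    return frame1, frame2

f1, f2 = two_layer_stimulus()
```

Rendering the two frames in alternation (and, for the three-frame condition, appending a third frame with a further shift) reproduces the kind of sequence the demo shows.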
The following movie shows part of an experiment on incoherence detection. It shows the virtual environment as an observer sees it through an HMD (Head-Mounted Display). This environment is used in our research to measure altered reality perception in schizophrenia patients.
In the movie, a subject moves through a virtual city. Every once in a while an incoherent event occurs, in which an object with a mismatching color, location, or sound appears. In such cases the movement stops, a red bar appears, and the subject is asked to identify the incoherent object (no input is requested in the demo).
The following synthetic movies (in dolly, or forward, motion) were generated from a sequence taken by a sideways moving camera:
room sequence - note change in camera orientation; (original seq)
building entrance - note change in reflections; (original seq)
car sequence - note change in orientation; (original seq)
coast sequence - note large image compensation; coast sequence rotating; (original seq)
horse sequence; (original seq, taken by a camera rotating around the object)
The following demonstrations were generated by our new view generation algorithm, described in "New view generation with a bi-centric camera", in Proceedings of the Seventh European Conference on Computer Vision (ECCV), Copenhagen, May 2002.
synthetic lab sequence, simulating forward motion
synthetic lab sequence, simulating sideways and forward motion
synthetic cafeteria sequence, simulating forward motion
synthetic cafeteria stereo sequence, simulating forward motion
(to view, use red-green stereo glasses with the red lens over the left eye)
synthetic cafeteria sequence, simulating sideways and forward motion
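The strip-based idea behind this kind of view synthesis can be illustrated with a toy sketch: a new view is assembled by sampling a different pixel column from each frame of a sideways-moving sequence, so the virtual camera's geometry is determined by the sampling pattern. This is an illustration of column-mosaicing in general, not the authors' actual bi-centric algorithm.

```python
import numpy as np

def column_mosaic_view(frames, columns):
    """Assemble a new view, one pixel column at a time.

    `frames` has shape (num_frames, height, width), taken by a
    sideways-moving camera; `columns` lists, for each output column,
    which input frame and which column within it to sample. Different
    sampling patterns simulate different virtual camera motions
    (toy illustration only).
    """
    return np.stack([frames[f][:, c] for f, c in columns], axis=1)

# Toy input: 4 frames of size 3x4, each filled with its frame index.
frames = np.stack([np.full((3, 4), i) for i in range(4)])
# Take the diagonal: column i of the output comes from frame i.
view = column_mosaic_view(frames, [(0, 0), (1, 1), (2, 2), (3, 3)])
```

With real footage, choosing the frame/column sampling pattern appropriately is what turns a sideways-motion sequence into a simulated forward (dolly) motion, as in the movies above.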
The following demonstrations were generated by our pointing-target detection algorithm, described in "A Computer Vision System for On-Screen Item Selection by Finger Pointing", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, I:1026-1033, Hawaii, Dec 2001.
selecting locations on the screen
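A common geometric model for finger pointing, sketched here as an illustration (not necessarily the paper's exact formulation): the selected on-screen point is where the ray from the eye through the fingertip intersects the screen plane.

```python
import numpy as np

def pointing_target(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye-fingertip ray with the screen plane.

    Returns the 3-D intersection point, or None if the ray is
    parallel to the screen. A generic ray-plane intersection used
    for illustration; all coordinates are assumed to be in a common
    calibrated reference frame.
    """
    eye = np.asarray(eye, float)
    d = np.asarray(fingertip, float) - eye       # ray direction
    n = np.asarray(plane_normal, float)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None                              # parallel to screen
    t = ((np.asarray(plane_point, float) - eye) @ n) / denom
    return eye + t * d

# Screen is the plane z = 0; eye behind the fingertip on the z axis.
p = pointing_target([0, 0, 2], [0.1, 0.1, 1], [0, 0, 0], [0, 0, 1])
# p → [0.2, 0.2, 0.0]
```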
The following demonstrations were generated by the curve matching algorithm described in "Hierarchical Silhouettes Classification using Curve Matching", in Proceedings of the DARPA Image Understanding Workshop, New Orleans, May 1997 (available on-line). Comparing each image pair took about 4-6 seconds on an Indigo II. The results are illustrated by tracing the two curves in synchrony, according to the way they were matched.
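The synchronized traversal can be sketched with a toy dynamic-programming curve alignment. This simplified stand-in matches points by comparing local turning angles (so it is invariant to translation, rotation, and scale); it is not the paper's hierarchical algorithm.

```python
import numpy as np

def match_curves(a, b):
    """Align two sampled 2-D curves by dynamic programming.

    `a` and `b` are (N, 2) arrays of curve points. The matching cost
    between two points is the difference of their local turning
    angles; backtracking the DP table yields the synchronized list
    of matched index pairs (toy sketch only).
    """
    def turn_angles(c):
        d = np.diff(np.asarray(c, float), axis=0)
        return np.diff(np.arctan2(d[:, 1], d[:, 0]))

    ta, tb = turn_angles(a), turn_angles(b)
    n, m = len(ta), len(tb)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(ta[i - 1] - tb[j - 1])
            cost[i, j] = c + min(cost[i-1, j-1], cost[i-1, j], cost[i, j-1])
    # Backtrack to recover the synchronized traversal.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        pairs.append((i - 1, j - 1))
        step = int(np.argmin([cost[i-1, j-1], cost[i-1, j], cost[i, j-1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Two curves with the same shape at different scales match diagonally.
a = np.array([[0.0, 0], [1, 0], [1, 1], [2, 1]])
b = np.array([[0.0, 0], [2, 0], [2, 2], [4, 2]])
pairs = match_curves(a, b)
```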
matching two different views of a wolf
(note how the front legs, both visible in one image but with one occluded in the other, are matched to each other)
matching two different views of a hippopotamus
matching a rotated cow to a hippopotamus
(note that in this example, as in all the others, the initial matching point was chosen automatically)
The following demonstrations were generated by the motion tracking algorithm described in "Motion of disturbances: detection and tracking of multi-body non-rigid motion", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Juan, June 1997 (available on-line).
Many moving objects under camouflage: tracking individual ants in an ant column on the forest floor.
A complex rapids sequence: the trajectories correctly follow the flow of the water, and the particles appear to be carried downstream.
A sequence taken with a downward-moving camera in a shopping mall, while the people in the scene move in different directions: 3 figures are reliably tracked.
A sequence taken by a non-stabilized camera at a traffic intersection: cars moving straight and cars turning left (whose 2-D projection changes non-rigidly) are tracked.
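For contrast with the disturbance-based method above, here is a toy single-patch tracker using an exhaustive sum-of-squared-differences search between consecutive frames. It is an illustration of the simplest possible tracking baseline, not a reproduction of the paper's algorithm, which handles many camouflaged, non-rigid targets at once.

```python
import numpy as np

def track_patch(frame1, frame2, top, left, size=5, search=3):
    """Track one square patch from frame1 to frame2.

    Searches a (2*search+1)^2 window of candidate positions and
    returns the (row, col) with the lowest sum of squared
    differences to the original patch (toy baseline only).
    """
    patch = frame1[top:top+size, left:left+size]
    best, best_pos = np.inf, (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > frame2.shape[0] \
                    or x + size > frame2.shape[1]:
                continue
            ssd = float(((frame2[y:y+size, x:x+size] - patch) ** 2).sum())
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Toy test: a bright square shifted by (2, 1) between the frames.
f1 = np.zeros((20, 20)); f1[5:10, 5:10] = 1.0
f2 = np.zeros((20, 20)); f2[7:12, 6:11] = 1.0
pos = track_patch(f1, f2, 5, 5)
```

A tracker like this fails exactly in the situations the demos above highlight (camouflage, non-rigid motion, many similar targets), which is what motivates the disturbance-based approach.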