Depth estimation
Estimating the position of objects in the real world is a fundamental goal of both biological and artificial early vision (e.g., in autonomous navigation and grasping tasks, recovering the distance of objects from the observer, i.e., their depth, is necessary to obtain reliable performance). The ability to discriminate differences in depth is greatly enhanced by stereoscopic vision, i.e., by measuring spatial structure from spatially separated cameras (left and right stereo views of a 3-D scene). In this case, depth estimation is mainly based on measurements of binocular disparity, defined as the local shift in the relative spatial location of the images of the same object on the two retinae. The problem of determining binocular disparity has an elegant formulation in the frequency domain through the so-called phase-based matching methods [Sanger, 1988], in which disparity is computed as the shift necessary to align the phase values of the band-pass filtered (with complex-valued Gabor kernels) versions of the binocular stereo signals. In contrast to classical techniques based on the search for correspondences, the choice of a phase-based approach is driven both by (i) the performance of the algorithm (all operations are completely local, and hyperacuity is possible down to the physical limits of the image acquisition system) and by (ii) the possibility of mapping the algorithm directly onto analog VLSI circuits based on layered networks of locally interacting nodes (lattice networks), which we analyzed and implemented as analog circuits.
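The phase-based idea can be illustrated on a single scanline: filter the left and right signals with the same complex Gabor kernel, take the phase difference of the responses, and divide by the filter's peak frequency to obtain the local shift. The following is a minimal 1-D sketch assuming NumPy; the function names and parameter values are illustrative, not those of the original implementation:

```python
import numpy as np

def gabor_phase(signal, omega0=0.5, sigma=8.0):
    """Band-pass filter a 1-D scanline with a complex Gabor kernel
    (Gaussian envelope times a complex exponential tuned to omega0)
    and return the local phase of the response."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega0 * x)
    response = np.convolve(signal, kernel, mode="same")
    return np.angle(response)

def phase_disparity(left, right, omega0=0.5, sigma=8.0):
    """Sanger-style phase matching: the disparity at each pixel is the
    phase difference between the filtered left and right scanlines,
    divided by the filter's peak spatial frequency omega0."""
    dphi = gabor_phase(left, omega0, sigma) - gabor_phase(right, omega0, sigma)
    # wrap the phase difference into (-pi, pi] before converting to a shift
    dphi = np.mod(dphi + np.pi, 2 * np.pi) - np.pi
    return dphi / omega0
```

For a sinusoidal pattern at the filter's tuned frequency, a rigid shift of the right view by d pixels produces a phase difference of omega0 * d, so the estimate recovers d directly, with sub-pixel precision, which is the hyperacuity property mentioned above. Note that the estimate is only valid while the true disparity stays within one period of the filter, which is why phase wrapping is an issue that refined versions of the algorithm must address.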

We proposed an improved version of Sanger's algorithm which yields a more reliable depth map together with an associated confidence measure, while avoiding the problem of phase-difference unwrapping.
Examples of disparity estimations (depth maps) obtained by our algorithm can be seen here.