RESEARCH NEWS


A Biologically Inspired Optic Flow Algorithm - Real-Time


This image shows the output of our real-time, biologically inspired motion-computation algorithm. The example sequence, a London taxi performing a U-turn, is split into four panels: the top two show the original and the temporally filtered input. The bottom left shows the computed speed encoded as a grey-scale, with brighter greys indicating faster motion. The bottom right shows the computed direction, as given by the colour wheel in the border.
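As a rough illustration of this encoding (and not the EcoVision code itself), the sketch below maps a single flow vector to a grey level for speed and a fully saturated colour-wheel hue for direction; the function name and the v_max normalisation constant are our own illustrative choices.

    #include <math.h>

    #define PI 3.14159265f

    /* Map one flow vector to an 8-bit grey level (speed) and a fully
     * saturated colour-wheel RGB triple (direction). v_max is the
     * speed that maps to white; names and constants are illustrative. */
    static void encode_flow(float vx, float vy, float v_max,
                            unsigned char *grey, unsigned char rgb[3])
    {
        float speed = sqrtf(vx * vx + vy * vy);
        float s     = speed / v_max;
        float hue   = (atan2f(vy, vx) + PI) / (2.0f * PI);  /* 0..1 */
        float h6, f, q, r, g, b;

        /* Brighter grey = faster motion, clipped at v_max. */
        *grey = (unsigned char)(255.0f * (s > 1.0f ? 1.0f : s));

        /* Hue to RGB at full saturation: one trip round the wheel. */
        h6 = hue * 6.0f;
        if (h6 >= 6.0f) h6 = 0.0f;
        f = h6 - (float)(int)h6;
        q = 1.0f - f;
        switch ((int)h6) {
        case 0:  r = 1; g = f; b = 0; break;
        case 1:  r = q; g = 1; b = 0; break;
        case 2:  r = 0; g = 1; b = f; break;
        case 3:  r = 0; g = q; b = 1; break;
        case 4:  r = f; g = 0; b = 1; break;
        default: r = 1; g = 0; b = q; break;
        }
        rgb[0] = (unsigned char)(255.0f * r);
        rgb[1] = (unsigned char)(255.0f * g);
        rgb[2] = (unsigned char)(255.0f * b);
    }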

February 10, 2003 — The computation of visual motion, or "optic flow", is a difficult problem in computer vision. Successful strategies require sophisticated algorithms and involve processing the vast quantities of information contained within image sequences. If machine vision systems are ever to interact usefully with a dynamic real-world environment, these computations must be carried out robustly and quickly.

Fortunately, biology provides us with a template of a working real-time vision system upon which we can draw when designing our algorithms. Copying neurobiological systems and transferring them into computer architectures allows us to capitalise on nature's robust solutions to difficult problems such as vision. However, the visual cortex is a massively parallel system capable of far more computation than present image-processing technology, so there are serious practical problems in getting a biologically inspired system to work fast enough to be useful.

We have developed a highly optimised implementation of the Multi-channel Gradient Model of motion perception, an algorithm based on the motion pathway of the visual cortex. A series of mathematical shortcuts and programming optimisations has enabled us to implement this sophisticated and versatile algorithm as a real-time system. Originally, the algorithm used over 600 spatio-temporal convolutions to simulate the receptive-field responses found in the visual cortex when computing each frame of motion, a process that took several minutes even on the fastest computers. We have developed routines that radically cut down the number of filtering operations, speeding up the throughput enormously. In addition, recursive temporal filters now limit the system latency to just 2 frames.

The software accepts input from a PXC200 low-cost framegrabber, a sequence of bitmaps, an AVI file or even a webcam. It has been written using a combination of Intel's Performance Libraries and our own hand-coded MMX/SSE assembler routines, making it possible to run this complicated motion algorithm in real time (20 fps at 120x80 resolution) on commercially available PC hardware.

We are now at the stage where we can begin to apply the system in a broad range of scenarios, with the long-term aim of developing an industrially useful piece of instrumentation. Currently, the algorithm is being evaluated for an embedded implementation on an FPGA. We are also implementing a novel adaptive space-variant sampling technique to deal with the wide range of velocities present in typical image sequences without resorting to a multi-layered (e.g. pyramid) approach.

More information on the Multi-channel Gradient Model and its real-time implementation can be found on Jason Dale's web page, and the paper on the model can be downloaded from Alan Johnston's web page.
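The Multi-channel Gradient Model recovers velocity from combinations of many spatio-temporal derivative-filter responses; the web pages above describe it in full. As a much simpler stand-in for the underlying gradient idea (a Lucas-Kanade-style least-squares scheme, not the Multi-channel Gradient Model itself), the sketch below estimates one velocity vector from the brightness-constancy constraint Ix*vx + Iy*vy + It = 0 over a small window; the derivative images and all names are hypothetical inputs of our own.

    /* Estimate (vx, vy) at pixel (x, y) by least squares on the
     * gradient constraint Ix*vx + Iy*vy + It = 0, summed over a
     * (2r+1)x(2r+1) window. Ix, Iy, It are w*h derivative images. */
    static int flow_at(const float *Ix, const float *Iy, const float *It,
                       int w, int h, int x, int y, int r,
                       float *vx, float *vy)
    {
        double sxx = 0, sxy = 0, syy = 0, sxt = 0, syt = 0;
        double det;
        int i, j;

        for (j = y - r; j <= y + r; ++j)
            for (i = x - r; i <= x + r; ++i) {
                double gx, gy, gt;
                if (i < 0 || j < 0 || i >= w || j >= h) continue;
                gx = Ix[j * w + i]; gy = Iy[j * w + i]; gt = It[j * w + i];
                sxx += gx * gx; sxy += gx * gy; syy += gy * gy;
                sxt += gx * gt; syt += gy * gt;
            }

        det = sxx * syy - sxy * sxy;
        if (det < 1e-9) return 0;   /* aperture problem: ill-conditioned */
        *vx = (float)((sxy * syt - syy * sxt) / det);
        *vy = (float)((sxy * sxt - sxx * syt) / det);
        return 1;                   /* velocity written to *vx, *vy */
    }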
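One family of mathematical shortcuts for cutting filtering costs is separability: when a 2D (or spatio-temporal) kernel factors into 1D kernels, an NxN convolution collapses into two length-N passes. The article does not spell out which specific reductions were used in this implementation, so the row-pass sketch below only illustrates the generic technique.

    /* Convolve each image row with a 1D kernel k of radius r
     * (length 2r+1), clamping at the borders. Running this pass and
     * then the same loop over columns applies a separable 2D kernel
     * with 2*(2r+1) multiplies per pixel instead of (2r+1)^2. */
    static void conv_rows(const float *src, float *dst, int w, int h,
                          const float *k, int r)
    {
        int x, y, t;
        for (y = 0; y < h; ++y)
            for (x = 0; x < w; ++x) {
                float acc = 0.0f;
                for (t = -r; t <= r; ++t) {
                    int xi = x + t;
                    if (xi < 0)  xi = 0;
                    if (xi >= w) xi = w - 1;
                    acc += k[t + r] * src[y * w + xi];
                }
                dst[y * w + x] = acc;
            }
    }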
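The latency figure rests on the structure of recursive (IIR) temporal filters: each output frame depends only on the current input frame and the filter's own previous output, so nothing like the full temporal support of an FIR convolution has to be buffered before a result can be emitted. The first-order smoother below is a minimal illustration of that principle, not the temporal prefilter actually used in the implementation.

    /* One recursive temporal filtering step, applied per pixel:
     * state <- state + alpha * (input - state), i.e. an exponential
     * smoother. Only the current frame and one state buffer are
     * needed, whatever the effective temporal extent of the filter. */
    static void temporal_step(const unsigned char *frame, float *state,
                              int n_pixels, float alpha)
    {
        int i;
        for (i = 0; i < n_pixels; ++i)
            state[i] += alpha * ((float)frame[i] - state[i]);
    }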
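Finally, the speed of the inner loops comes from SIMD: SSE instructions operate on four packed single-precision floats at a time. The fragment below uses compiler intrinsics rather than hand-coded assembler, and is our illustration of the kind of vectorised convolution-tap update such routines are built from, not the project's actual code.

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* dst[i] += coeff * src[i], four floats per iteration: one tap of
     * a convolution accumulated across an image. For simplicity this
     * assumes n is a multiple of 4 and both pointers are 16-byte
     * aligned; real code needs a scalar tail and alignment handling. */
    static void tap_accumulate(float *dst, const float *src,
                               float coeff, int n)
    {
        __m128 c = _mm_set1_ps(coeff);
        int i;
        for (i = 0; i < n; i += 4) {
            __m128 s = _mm_load_ps(src + i);
            __m128 d = _mm_load_ps(dst + i);
            d = _mm_add_ps(d, _mm_mul_ps(s, c));
            _mm_store_ps(dst + i, d);
        }
    }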


Jason Dale
Alan Johnston
Department of Psychology
University College London





Date Modified: February 11, 2003 by S.P. Sabatini