Date of Award

5-2011

Document Type

Thesis

Degree Name

Master of Science (MS)

Legacy Department

Electrical Engineering

Committee Chair/Advisor

Birchfield, Stan

Committee Member

Schalkoff, Robert

Committee Member

Gowdy, John

Abstract

Feature tracking algorithms have conventionally tracked 'corner' features or windows with high spatial frequency content. However, this point-feature representation of a scene is inappropriate for poorly textured image sequences, such as indoor sequences. To overcome this problem, we propose a feature tracking algorithm that tracks point features and edgelets simultaneously. Edgelets are straight-line approximations of intensity edges in an image. A combination of point features and edgelets therefore provides a better representation of untextured sequences, with the two feature types complementing each other. We show that this property results in more robust tracking.
Tracking edgelets is challenging due to the inherent aperture problem. This thesis proposes an optical flow-based tracking method that tracks point features and edgelets in a combined fashion. The aperture problem of the edgelets is overcome using Horn-Schunck regularization to penalize flow vectors that deviate from those of the neighboring features. The method uses a translational motion model for tracking individual features, and hence only the frame-to-frame displacement of the point features and edgelets is computed.
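As an illustration of this combined formulation, the following is a minimal Python/NumPy sketch of a per-feature energy of the kind described above: a brightness-constancy data term (the optical flow constraint) over the feature window, plus a Horn-Schunck-style regularization term that penalizes deviation of the feature's displacement from the average displacement of its neighbors. The function name, the quadratic form of the penalty, and the weighting constant are illustrative assumptions, not the exact formulation used in the thesis.

```python
import numpy as np

def feature_energy(d, Ix, Iy, It, neighbor_flows, lam=1.0):
    """Illustrative per-feature energy (hypothetical form).

    d              -- candidate displacement (u, v) of this feature
    Ix, Iy, It     -- spatial/temporal image derivatives over the feature window
    neighbor_flows -- displacements of neighboring features, shape (N, 2)
    lam            -- weight of the regularization term (assumed value)
    """
    u, v = d
    # Data term: optical flow constraint Ix*u + Iy*v + It = 0 over the window
    data = np.sum((Ix * u + Iy * v + It) ** 2)
    # Horn-Schunck-style term: penalize deviation from the average neighbor flow
    d_avg = np.mean(neighbor_flows, axis=0)
    smooth = np.sum((np.asarray(d) - d_avg) ** 2)
    return data + lam * smooth
```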
The point features are detected using the Shi-Tomasi method. The edgelets are detected using the Canny edge map and the Douglas-Peucker polyline approximation algorithm. Motion is assumed to be constant within a neighborhood around each point feature and across all edgels in an edgelet. The point features and edgelets are tracked by minimizing an energy function consisting of the optical flow constraint and the sum of the negative gradient magnitudes of the edgels. Thus the edgelets, which are typically attracted to nearby intensity edges, are also guided by the optical flow constraint equation and by the motion of the neighboring features.
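A rough sketch of this detection stage using standard OpenCV calls is shown below: cv2.goodFeaturesToTrack implements Shi-Tomasi corner detection, cv2.Canny produces the edge map, and cv2.approxPolyDP performs Douglas-Peucker polyline approximation. The edge-linking step via cv2.findContours, the parameter values, and the representation of an edgelet as a pair of endpoints are assumptions made for illustration, not necessarily the choices made in the thesis.

```python
import cv2

def detect_features(gray):
    """Detect point features (Shi-Tomasi corners) and edgelets
    (Canny edges approximated by Douglas-Peucker polylines)."""
    # Shi-Tomasi corner detection (parameter values are placeholders)
    points = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                     qualityLevel=0.01, minDistance=5)

    # Canny edge map, then link edgels into chains (one possible linking step)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    # Douglas-Peucker polyline approximation: each straight segment of the
    # approximated polyline is treated as one edgelet (pair of endpoints)
    edgelets = []
    for chain in contours:
        poly = cv2.approxPolyDP(chain, epsilon=2.0, closed=False)
        for p0, p1 in zip(poly[:-1], poly[1:]):
            edgelets.append((tuple(p0[0]), tuple(p1[0])))
    return points, edgelets
```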
We also provide a pyramidal implementation of our algorithm, with the uppermost pyramidal level representing the original image and the lower levels consisting of downsampled images. The unified feature tracking method uses this pyramidal implementation in a way that respects the scale at which each feature is visible. Hence, the motion vectors computed at the lower pyramidal levels are reliable and improve the tracking robustness. Moreover, the average flow vector contributed by the neighboring features is computed by fitting an affine motion model to them. The neighbors are weighted based on their distance from the feature and the pyramidal level at which they are visible.
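To illustrate this last step, here is a minimal NumPy sketch of fitting an affine motion model to the displacements of neighboring features by weighted least squares and evaluating it at a feature's position. The Gaussian distance-based weighting and its scale parameter are assumptions; the weighting described above also depends on the pyramidal level at which each neighbor is visible, which is omitted here.

```python
import numpy as np

def affine_neighbor_flow(x, neighbor_pos, neighbor_disp, sigma=20.0):
    """Fit an affine motion model  d(p) = A^T [px, py, 1]  to the neighbors'
    displacements by weighted least squares, then evaluate it at x.

    x             -- (2,) position of the feature of interest
    neighbor_pos  -- (N, 2) positions of neighboring features
    neighbor_disp -- (N, 2) displacements of those features
    sigma         -- scale of the (assumed) Gaussian distance weighting
    """
    P = np.asarray(neighbor_pos, dtype=float)
    D = np.asarray(neighbor_disp, dtype=float)

    # Distance-based weights: closer neighbors count more
    dist = np.linalg.norm(P - np.asarray(x, dtype=float), axis=1)
    w = np.sqrt(np.exp(-(dist ** 2) / (2.0 * sigma ** 2)))[:, None]

    # Weighted least-squares fit of the 3x2 affine parameter matrix A
    M = np.hstack([P, np.ones((len(P), 1))])
    A, *_ = np.linalg.lstsq(w * M, w * D, rcond=None)

    # Predicted average flow at the feature position
    return np.array([x[0], x[1], 1.0]) @ A
```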
