2009 IEEE International Conference on Systems, Man, and Cybernetics
Abstract
We present a real-time smoothing methodology for stabilizing video captured from small robotic helicopter platforms. We suppress presumably unintended high-frequency motion by considering the relative rotation and displacement between successive frames. Assuming that camera movement dominates the motion field of aerial footage, we propose three options to model global motion: a similarity, an affine, or a bilinear transformation. We found that all of these models can effectively stabilize video and should be chosen according to the expected camera motion. In our implementation, all transformations can be estimated by iterative least squares (LS), and the affine model can also be fitted by a proposed iterative total least squares (TLS) procedure. Field experiments were carried out with a tele-operated helicopter that transmits wireless video to a receiver on the ground, where the video is processed. With this configuration we stabilized video at 20 to 28 fps while overcoming problems caused by high levels of noise. Extending our digital smoother with more complex vision-understanding processes appears straightforward given its flexibility and robustness.
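The least-squares estimation of a global affine motion model mentioned above can be illustrated with a minimal sketch. The function name `estimate_affine` and the use of NumPy are assumptions for illustration; in practice the point correspondences would come from frame-to-frame feature matching, and the paper's iterative LS/TLS refinement (e.g. outlier reweighting) is not shown here.

```python
import numpy as np

def estimate_affine(src, dst):
    """Estimate a 2-D affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding image points, N >= 3.
    Returns the 6 parameters (a, b, c, d, e, f) of the model
        x' = a*x + b*y + c,   y' = d*x + e*y + f,
    solved in the ordinary least-squares sense.
    """
    n = src.shape[0]
    # Design matrix: two rows (one x-equation, one y-equation) per point.
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src    # x' rows: [x, y, 1, 0, 0, 0]
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src    # y' rows: [0, 0, 0, x, y, 1]
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)   # interleaved [x0', y0', x1', y1', ...]
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params
```

A similarity model would constrain the matrix to a scaled rotation (4 parameters), and a bilinear model would add an `x*y` cross term per coordinate; both fit the same least-squares framework with a different design matrix.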