3-D Point Cloud Generation from Rigid and Flexible Stereo Vision Systems

Date
2009-12-04
Publisher
Virginia Tech
Abstract

When operating an Unmanned Aerial Vehicle (UAV) or an Unmanned Ground Vehicle (UGV), problems such as landing-site estimation or robot path planning become a concern. Deciding whether an area of terrain has a sufficiently level slope and a large enough area to land a Vertical Take-Off and Landing (VTOL) UAV, or whether an area of terrain is traversable by a ground robot, relies on data gathered from sensors such as cameras. 3-D models, built from data extracted from digital cameras, can help facilitate decision making for such tasks by providing a virtual model of the environment surrounding the system. A stereo vision system uses two or more cameras, which capture images of a scene from two or more viewpoints, to create 3-D point clouds. A point cloud is a set of un-gridded 3-D points corresponding to a 2-D image, and it is used to build gridded surface models. Designing a stereo system for distant terrain modeling requires an extended baseline, or distance between the two cameras, in order to obtain a reasonable depth resolution. As the width of the baseline increases, so does the flexibility of the system, causing the orientation of the cameras to deviate from their original state. A set of tools has been developed to generate 3-D point clouds from rigid and flexible stereo systems, along with a method for applying corrections to a flexible system in order to regain distance accuracy.
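The relationship between baseline width and depth resolution described above can be sketched with the standard triangulation equations for an ideal rectified stereo pair. This is a minimal illustration, not the thesis's tooling; the focal length, baseline, and range values below are assumed for demonstration only.

```python
# Illustrative sketch (assumed parameters, not from the thesis): depth and
# depth resolution for an ideal rectified stereo pair.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

def depth_resolution(f_px, baseline_m, depth_m, disparity_step_px=1.0):
    """Depth change caused by a one-step disparity error:
    dZ ~ Z**2 / (f * B) * dd.  The error grows quadratically with range,
    which is why distant-terrain modeling needs a wide baseline."""
    return depth_m ** 2 / (f_px * baseline_m) * disparity_step_px

# Example: a 1000 px focal length, comparing a 0.1 m and a 1.0 m baseline
# at 50 m range.  The wider baseline gives a proportionally finer
# depth resolution at the same range.
for B in (0.1, 1.0):
    dz = depth_resolution(f_px=1000.0, baseline_m=B, depth_m=50.0)
    print(f"baseline {B:.1f} m -> depth uncertainty {dz:.2f} m per pixel of disparity")
```

This also shows the trade-off the abstract raises: widening the baseline improves depth resolution, but a longer, more flexible mount lets the cameras deviate from their calibrated orientation, which is what the correction method addresses.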

Keywords
Stereo Vision, Drone aircraft, VTOL, Camera Calibration, Terrain Mapping