
    Vision-based Navigation for Mobile Robots on Ill-structured Roads

    Date: 2010-01-16
    Author: Lee, Hyun Nam
    Abstract
    Autonomous robots can replace humans in exploring hostile areas, such as Mars and other inhospitable regions. A fundamental task for an autonomous robot is navigation. Due to the inherent difficulty of understanding natural objects and changing environments, navigation in unstructured environments, such as natural terrain, remains largely unsolved. However, studying navigation in ill-structured environments [1], where roads do not disappear completely, improves our understanding of these difficulties. We develop algorithms for robot navigation on ill-structured roads with monocular vision, based on two elements: appearance information and geometric information.

    The fundamental problem in appearance-based navigation is road representation. We propose a new type of road description, the vision vector space (V2-Space), which is a set of local collision-free directions in image space. We report how the V2-Space is constructed and how it can be used to incorporate vehicle kinematic, dynamic, and time-delay constraints into motion planning. Because appearance-based navigation lacks geometric information, failures can still occur; we therefore extend the work to incorporate geometric information and present a vision-based navigation system that uses it. To compute depth with monocular vision, we use images obtained from different camera perspectives during robot navigation. For any given image pair, the depth error in regions close to the camera baseline can be excessively large. We name this degenerate region the untrusted area; relying on depth estimates there could lead to collisions. We analyze how untrusted areas are distributed on the road plane and predict them before the robot makes its move. We propose an algorithm that helps the robot avoid untrusted areas by selecting optimal locations at which to capture frames while navigating. Experiments show that the algorithm significantly reduces depth error and hence the risk of collisions. Although developed for monocular vision, the approach can be applied to multiple cameras to control depth error, and the concept of an untrusted area extends to 3D reconstruction with a two-view approach.
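    The abstract describes the V2-Space only as a set of local collision-free directions in image space. As a rough illustration of that idea only (not the thesis's actual construction), the following Python sketch derives a set of free headings from a hypothetical binary road mask; the function name, the column-wise traversability test, and the field-of-view mapping are all assumptions made for this example.

    import numpy as np

    def v2_space(road_mask, fov_deg=90.0):
        """Toy stand-in for a V2-Space: the set of image-space directions
        whose lower-image column is entirely traversable."""
        h, w = road_mask.shape
        free_cols = road_mask[h // 2:, :].all(axis=0)       # column clear of obstacles
        angles = np.linspace(-fov_deg / 2, fov_deg / 2, w)  # column -> heading angle
        return angles[free_cols]

    # Demo: a 6x8 mask with an obstacle blocking the lower-right image region.
    mask = np.ones((6, 8), dtype=bool)
    mask[3:, 6:] = False
    print(v2_space(mask))   # collision-free headings, in degrees

    A planner could then pick, from this set, the heading closest to the goal direction that also satisfies the vehicle's kinematic and dynamic limits, in the spirit of the constraint handling the abstract mentions.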
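    The claim that depth error grows excessively near the camera baseline can be checked with a small two-view triangulation experiment. The sketch below is my own illustration, not the thesis's algorithm: it uses midpoint triangulation under Gaussian bearing noise, and the baseline length, noise level, and test points are assumed values.

    import numpy as np

    rng = np.random.default_rng(0)

    def triangulate(c1, d1, c2, d2):
        """Midpoint triangulation: closest point between rays c_i + t*d_i."""
        A = np.stack([d1, -d2], axis=1)              # 3x2 system for (t, s)
        (t, s), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
        return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

    def mean_depth_error(p, c1, c2, noise=1e-3, trials=500):
        """Average depth error when the second bearing carries small angular noise."""
        errs = []
        for _ in range(trials):
            d1 = (p - c1) / np.linalg.norm(p - c1)
            d2 = (p - c2) / np.linalg.norm(p - c2)
            d2 = d2 + rng.normal(0.0, noise, 3)      # ~noise rad of bearing error
            d2 /= np.linalg.norm(d2)
            est = triangulate(c1, d1, c2, d2)
            errs.append(abs(np.linalg.norm(est - c1) - np.linalg.norm(p - c1)))
        return np.mean(errs)

    # Robot drives 0.5 m forward along z between the two frames (assumed baseline).
    c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.5])
    near_baseline = np.array([0.05, 0.0, 5.0])       # almost on the baseline line
    off_baseline  = np.array([2.00, 0.0, 5.0])       # well to the side
    print(mean_depth_error(near_baseline, c1, c2))   # large: an "untrusted area"
    print(mean_depth_error(off_baseline,  c1, c2))   # much smaller

    Under these assumptions, the near-baseline point's error comes out orders of magnitude larger than the off-baseline point's, which illustrates why frame-capture locations are chosen to keep obstacles out of such regions.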
    URI: http://hdl.handle.net/1969.1/ETD-TAMU-2008-08-39
    Collections
    • Texas A&M University at College Station
