Vision Based Mobile Robot Navigation In Unknown Environments Using Natural Landmarks PDF Download


Machine Learning-based Natural Scene Recognition for Mobile Robot Localization in An Unknown Environment

Author: Xiaochun Wang
Publisher: Springer
Total Pages: 328
Release: 2019-08-12
Genre: Technology & Engineering
ISBN: 981139217X


This book advances research on mobile robot localization in unknown environments by focusing on machine-learning-based natural scene recognition. The chapters highlight the latest developments in vision-based machine perception and machine learning research for localization applications, covering topics such as: image-segmentation-based visual perceptual grouping for the efficient identification of objects composing unknown environments; classification-based rapid object recognition for the semantic analysis of natural scenes in unknown environments; the current understanding of the prefrontal cortex's working-memory mechanism and its biological processes for human-like localization; and the application of this understanding to improve mobile robot localization. The book also offers a perspective on bridging the gap between feature representations and decision-making using reinforcement learning, laying the groundwork for future advances in mobile robot navigation research.


Vision Based Autonomous Robot Navigation

Author: Amitava Chatterjee
Publisher: Springer
Total Pages: 235
Release: 2012-10-13
Genre: Technology & Engineering
ISBN: 3642339654


This monograph is devoted to the theory and development of autonomous navigation of mobile robots using computer vision as the sensing mechanism. Conventional robot navigation systems, which rely on traditional sensors such as ultrasonic, IR, GPS and laser devices, suffer from several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative, since cameras can reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches to real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based, goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be developed in-house in the laboratory with microcontroller-based sensor systems. The book describes the successful integration of low-cost external peripherals with off-the-shelf robots. An important highlight of the book is a detailed, step-by-step demonstration of how vision-based navigation modules can actually be implemented in real life under a 32-bit Windows environment. The book also discusses the implementation of vision-based SLAM employing a two-camera system.
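As a toy illustration of the fuzzy path/line-tracking idea mentioned in this blurb, the sketch below fuzzifies the lateral offset of a detected line in the image into three sets and defuzzifies a steering command by a weighted average. The membership shapes, gains and sign convention are invented for the example and are not taken from the book.

```python
# Toy fuzzy line-tracking controller: membership functions and gains are invented
# for this sketch and are not the book's actual design.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(offset):
    """offset in [-1, 1]: detected line position relative to image centre (negative = left)."""
    rules = [
        (tri(offset, -1.5, -1.0, 0.0),  0.6),   # line far left  -> steer left (positive command)
        (tri(offset, -1.0,  0.0, 1.0),  0.0),   # line centred   -> go straight
        (tri(offset,  0.0,  1.0, 1.5), -0.6),   # line far right -> steer right (negative command)
    ]
    num = sum(weight * out for weight, out in rules)
    den = sum(weight for weight, _ in rules)
    return num / den if den else 0.0            # weighted-average defuzzification

print(fuzzy_steer(-0.5))   # 0.3: line is to the left, so steer moderately left
```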


Mobile Robot Navigation Using a Vision Based Approach

Author: Mehmet Serdar Güzel
Publisher:
Total Pages:
Release: 2012
Genre:
ISBN:


This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems, namely vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance; however, the current view is that this technique is too sensitive to noise and distortion under real conditions, and practical applications in real-time robotics accordingly remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance using a hybrid architecture, which integrates an appearance-based obstacle detection method into an optical flow architecture built upon a behavioural control strategy that includes a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to move around without experiencing collisions. Behaviour-based approaches have become the dominant methodology for designing control strategies for robot navigation. Two different behaviour-based navigation architectures are proposed for the second problem, using monocular vision as the primary sensor together with a 2-D range finder. Both utilize an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. This allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture while conserving the same set of behaviours and system structure as the previously defined architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm: the scale parameters and a corresponding zooming factor are used as inputs to train a neural network, which yields the physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural-network-based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to complete its task in a smooth and robust manner without experiencing collisions. MS Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems. KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework.
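The distance-estimation idea described above (SIFT scale parameters plus a zooming factor fed to a neural network) can be sketched roughly as follows. This is only an illustrative approximation: the training pairs, the network size and the use of the mean keypoint scale are assumptions made for the example, not the dissertation's actual design.

```python
# Sketch of scale-based distance estimation: mean SIFT keypoint scale and a zoom
# factor are fed to a small regression network.  The training pairs below are
# hypothetical placeholders, not data from the dissertation.
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def mean_sift_scale(image_bgr):
    """Average SIFT keypoint scale over the image (a stand-in for the landmark region)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, _ = cv2.SIFT_create().detectAndCompute(gray, None)
    return float(np.mean([kp.size for kp in keypoints])) if keypoints else 0.0

# Hypothetical training data: (mean scale, zoom factor) -> distance in metres.
X_train = np.array([[12.0, 1.0], [8.5, 1.0], [6.0, 1.0], [12.0, 2.0], [8.5, 2.0]])
y_train = np.array([1.0, 1.5, 2.2, 2.0, 3.0])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

def estimate_distance(image_bgr, zoom_factor):
    """Predict the physical distance to the observed landmark (metres)."""
    return float(net.predict([[mean_sift_scale(image_bgr), zoom_factor]])[0])
```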


Robot Navigation from Nature

Author: Michael John Milford
Publisher: Springer Science & Business Media
Total Pages: 203
Release: 2008-02-11
Genre: Technology & Engineering
ISBN: 3540775196


This pioneering book describes the development of a robot mapping and navigation system inspired by models of the neural mechanisms underlying spatial navigation in the rodent hippocampus. Computational models of animal navigation systems have traditionally had limited performance when implemented on robots. This is the first research to test existing models of rodent spatial mapping and navigation on robots in large, challenging, real-world environments.


Learning and Vision Algorithms for Robot Navigation

Author: Margrit Betke
Publisher:
Total Pages: 128
Release: 1995
Genre: Machine learning
ISBN:


Abstract: "This thesis studies problems that a mobile robot encounters while it is navigating through its environment. The robot either explores an unknown environment or navigates through a somewhat familiar environment. The thesis addresses the design of algorithms for 1. environment learning, 2. position estimation using landmarks, 3. visual landmark recognition. In the area of mobile robot environment learning, we introduce the problem of piecemeal learning of an unknown environment: the robot must return to its starting point after each piece of exploration. We give linear time algorithms for exploring environments modeled as grid- graphs with rectangular obstacles. Our best algorithm for piecemeal learning of arbitrary undirected graphs runs in almost linear time. It is crucial for a mobile robot to be able to localize itself in its environment. We describe a linear time algorithm for localizing the mobile robot in an environment with landmarks. The robot can identify these landmarks and measure their bearings. Given such noisy meaurements, the algorithm estimates the robot's position and orientation with respect to the map of the environment. The algorithm makes efficient use of our representation of the landmarks by complex numbers. The thesis also addresses the problem of how landmarks in the robot's surroundings can be recognized visually. We introduce an efficient, model-based recognition algorithm that exploits a fast version of simulated annealing. To avoid false recognition, we propose a method to select model images by measuring the information content of the images. The performance of the algorithm is demonstrated with real-world images of traffic signs."


Principles of Robot Motion

Author: Howie Choset
Publisher: MIT Press
Total Pages: 642
Release: 2005-05-20
Genre: Technology & Engineering
ISBN: 9780262033275


A text that makes the mathematical underpinnings of robot motion accessible and relates low-level details of implementation to high-level algorithmic concepts. Robot motion planning has become a major focus of robotics. Research findings can be applied not only to robotics but to planning routes on circuit boards, directing digital actors in computer graphics, robot-assisted surgery and medicine, and novel areas such as drug design and protein folding. This text reflects the great advances that have taken place in the last ten years, including sensor-based planning, probabilistic planning, localization and mapping, and motion planning for dynamic and nonholonomic systems. Its presentation makes the mathematical underpinnings of robot motion accessible to students of computer science and engineering, relating low-level implementation details to high-level algorithmic concepts.


Vision-based Robot Localization Using Artificial and Natural Landmarks

Author:
Publisher:
Total Pages:
Release: 2004
Genre:
ISBN:


In mobile robot applications, it is important for a robot to know where it is. Accurate localization is crucial for navigation and map-building applications, because both the route to follow and the positions of the objects to be inserted into the map depend strongly on the position of the robot in the environment. For localization, the robot uses measurements taken by various devices such as laser rangefinders, sonars, odometry devices and vision. Generally, these devices give the distances from the robot to objects in the environment, and by processing this distance information the robot finds its location. In this thesis, two vision-based robot localization algorithms are implemented. The first algorithm uses artificial landmarks as the objects around the robot; by measuring the positions of these landmarks with respect to the camera system, the robot locates itself in the environment. The locations of these landmarks are known. The second algorithm, instead of using artificial landmarks, estimates the robot's location by measuring the positions of objects that naturally exist in the environment. These objects are treated as natural landmarks, and their locations are not known initially. A three-wheeled robot base, on which a stereo camera system is mounted, is used as the mobile robot unit. Processing and control tasks of the system are performed by a stationary PC. Experiments are performed on this robot system, with the stereo camera system serving as the measurement device.
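A minimal sketch of the first, artificial-landmark idea might look as follows: landmarks with known world coordinates are observed in the robot/camera frame (e.g. by stereo triangulation), and the robot pose is recovered as the rigid transform that best aligns the two point sets. This is the standard 2-D Kabsch/Procrustes fit, offered only as an assumed stand-in, not necessarily the thesis's exact method.

```python
# Pose from known landmarks: fit the rotation + translation mapping landmark
# coordinates measured in the robot frame onto their known world coordinates.
import numpy as np

def estimate_pose(world_pts, robot_pts):
    """world_pts, robot_pts: matching (N, 2) arrays of landmark positions."""
    world = np.asarray(world_pts, dtype=float)
    robot = np.asarray(robot_pts, dtype=float)
    cw, cr = world.mean(axis=0), robot.mean(axis=0)
    H = (robot - cr).T @ (world - cw)               # cross-covariance of the centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                                  # rotation: robot frame -> world frame
    if np.linalg.det(R) < 0:                        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cw - R @ cr                                 # robot position in the world frame
    heading = np.arctan2(R[1, 0], R[0, 0])
    return t, heading

# Example: robot at (1, 2) with heading 90 degrees observing three known landmarks.
world = [[3.0, 2.0], [1.0, 5.0], [-2.0, 2.0]]
robot = [[0.0, -2.0], [3.0, 0.0], [0.0, 3.0]]       # the same landmarks seen in the robot frame
print(estimate_pose(world, robot))                  # approximately (array([1., 2.]), 1.5708)
```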


Vision-based Navigation for Mobile Robots on Ill-structured Roads

Author: Hyun Nam Lee
Publisher:
Total Pages:
Release: 2010
Genre:
ISBN:


Autonomous robots can replace humans in exploring hostile areas, such as Mars and other inhospitable regions. A fundamental task for an autonomous robot is navigation. Due to the inherent difficulties in understanding natural objects and changing environments, navigation in unstructured environments, such as natural environments, remains a largely unsolved problem. However, studying navigation in ill-structured environments [1], where roads do not disappear completely, improves our understanding of these difficulties. We develop algorithms for robot navigation on ill-structured roads with monocular vision, based on two elements: appearance information and geometric information. The fundamental problem of appearance-based navigation is road representation. We propose a new type of road description, the vision vector space (V2-Space), which is a set of local collision-free directions in image space. We report how the V2-Space is constructed and how it can be used to incorporate vehicle kinematic, dynamic, and time-delay constraints in motion planning. Failures occur due to the limitations of appearance-based navigation, such as the lack of geometric information, so we expand the research to include geometric information and present a vision-based navigation system that uses it. To compute depth with monocular vision, we use images obtained from different camera perspectives during robot navigation. For any given image pair, the depth error in regions close to the camera baseline can be excessively large. This degenerate region is named the untrusted area, and it can lead to collisions. We analyze how the untrusted areas are distributed on the road plane and predict them before the robot makes its move. We propose an algorithm that helps the robot avoid the untrusted area by selecting optimal locations at which to take frames while navigating. Experiments show that the algorithm can significantly reduce the depth error and hence the risk of collisions. Although this approach is developed for monocular vision, it can be applied to multiple cameras to control the depth error, and the concept of an untrusted area can be applied to 3D reconstruction with a two-view approach.
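A rough sketch of the untrusted-area idea: two-view depth degrades as the triangulation angle between the viewing rays from the two camera positions shrinks, which happens for points lying close to the extended camera baseline. The 1-degree threshold and the example geometry below are illustrative assumptions, not the thesis's actual error analysis.

```python
# Illustrative check for the "untrusted area": depth from two views is unreliable
# where the rays from the two camera centres to a point are nearly parallel.
import numpy as np

def triangulation_angle(cam1, cam2, point):
    """Angle (rad) between the rays from the two camera centres to a 3-D point."""
    r1, r2 = point - cam1, point - cam2
    cos_a = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def untrusted(cam1, cam2, point, min_angle_deg=1.0):
    """A ground point is untrusted when the two viewing rays are nearly parallel."""
    return triangulation_angle(cam1, cam2, point) < np.radians(min_angle_deg)

# Forward motion of 0.3 m along z; coordinates are (x, y, z) in metres.
cam_a = np.array([0.0, 0.0, 0.0])
cam_b = np.array([0.0, 0.0, 0.3])
print(untrusted(cam_a, cam_b, np.array([0.05, -0.4, 6.0])))  # True: lies near the baseline direction
print(untrusted(cam_a, cam_b, np.array([2.0, -0.4, 3.0])))   # False: enough parallax to trust depth
```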