
Uncertainty-aware Spatiotemporal Perception for Autonomous Vehicles

Author: Masha Itkina
Publisher:
Total Pages:
Release: 2022
Genre:
ISBN:

Autonomous vehicles are set to revolutionize transportation in terms of safety and efficiency. However, autonomous systems still have challenges operating in complex human environments, such as an autonomous vehicle in a cluttered, dynamic urban setting. A key obstacle to deploying autonomous systems on the road is understanding, anticipating, and making inferences about human behaviors. Autonomous perception builds a general understanding of the environment for a robot. This includes making inferences about human behaviors in both space and time. Humans are difficult to model due to their vastly diverse behaviors and rapidly evolving objectives. Moreover, in cluttered settings, there are computational and visibility limitations. However, humans also possess desirable capabilities, such as their ability to generalize beyond their observed environment. Although learning-based systems have had success in recent years in modeling and imitating human behavior, efficiently capturing the data and model uncertainty for these systems remains an open problem. This thesis proposes algorithmic advances to uncertainty-aware autonomous perception systems in human environments. We make system-level contributions to spatiotemporal robot perception that reasons about human behavior, and foundational advancements in uncertainty-aware machine learning models for trajectory prediction. These contributions enable robotic systems to make uncertainty- and socially-aware spatiotemporal inferences about human behavior. Traditional robot perception is object-centric and modular, consisting of object detection, tracking, and trajectory prediction stages. These systems can fail prior to the prediction stage due to partial occlusions in the environment. We thus propose an alternative end-to-end paradigm for spatiotemporal environment prediction from a map-centric occupancy grid representation. 
Occupancy grids are robust to partial occlusions, can handle an arbitrary number of human agents in the scene, and do not require a priori information regarding the environment. We investigate the performance of computer vision techniques in this context and develop new mechanisms tailored to the task of spatiotemporal environment prediction. Spatially, robots also need to reason about fully occluded agents in their environment, which may occur due to sensor limitations or other agents on the road obstructing the field of view. Humans excel at extrapolating from their experiences by making inferences from observed social behaviors. We draw inspiration from human intuition to fill in portions of the robot's map that are not observable by traditional sensors. We infer occupancy in these occluded regions by learning a multimodal mapping from observed human driver behaviors to the environment ahead of them, thus treating people as sensors. Our system handles multiple observed agents to maximally inform the occupancy map around the robot. In order to safely integrate human behavior modeling into the robot autonomy stack, the perception system must efficiently account for uncertainty. Human behavior is often modeled using discrete latent spaces in learning-based models to capture the multimodality in the distribution. For example, in a trajectory prediction task, there may be multiple valid future predictions given a past trajectory. To accurately model this latent distribution, the latent space needs to be sufficiently large, leading to tractability concerns for downstream tasks, such as path planning. We address this issue by proposing a sparsification algorithm for discrete latent sample spaces that can be applied post hoc without sacrificing model performance. Our approach successfully balances multimodality and sparsity to achieve efficient data uncertainty estimation. 
Aside from modeling data uncertainty, learning-based autonomous systems must be aware of their model uncertainty or what they do not know. Flagging out-of-distribution or unknown scenarios encountered in the real world could be helpful to downstream autonomy stack components and to engineers for further system development. Although the machine learning community has been prolific in model uncertainty estimation for small benchmark problems, relatively little work has been done on estimating this uncertainty in complex, learning-based robotic systems. We propose efficiently learning the model uncertainty over an interpretable, low-dimensional latent space in the context of a trajectory prediction task. The algorithms presented in this thesis were validated on real-world autonomous driving data and baselined against state-of-the-art techniques. We show that drawing inspiration from human-level reasoning while modeling the associated uncertainty can inform environment understanding for autonomous perception systems. The contributions made in this thesis are a step towards uncertainty- and socially-aware autonomous systems that can function seamlessly in human environments.
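The latent-space sparsification step described in the abstract lends itself to a simple illustration. The sketch below keeps the smallest set of discrete latent modes covering a target probability mass and renormalizes the rest away; the function name, threshold, and toy distribution are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def sparsify_latent(probs, mass_threshold=0.85):
    """Post-hoc sparsification of a discrete latent distribution.

    Keeps the smallest set of modes whose cumulative probability
    reaches `mass_threshold`, then renormalizes, preserving the
    significant modes (multimodality) while dropping the long tail.
    """
    order = np.argsort(probs)[::-1]                    # most likely first
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, mass_threshold)) + 1  # modes to keep
    keep = order[:k]
    sparse = np.zeros_like(probs)
    sparse[keep] = probs[keep] / probs[keep].sum()     # renormalize
    return sparse

# A toy 6-mode latent distribution: two dominant modes plus a tail.
p = np.array([0.45, 0.35, 0.10, 0.05, 0.03, 0.02])
q = sparsify_latent(p)
print(np.count_nonzero(q))  # 3 of the 6 modes survive
```

A downstream planner then only has to branch over the surviving modes rather than the full latent space.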


Learning to Drive

Author: David Michael Stavens
Publisher: Stanford University
Total Pages: 104
Release: 2011
Genre:
ISBN:

Every year, 1.2 million people die in automobile accidents and up to 50 million are injured. Many of these deaths are due to driver error and other preventable causes. Autonomous or highly aware cars have the potential to positively impact tens of millions of people. Building an autonomous car is not easy. Although the absolute number of traffic fatalities is tragically large, the failure rate of human driving is actually very small: a human driver makes a fatal mistake roughly once in 88 million miles. As co-founding members of the Stanford Racing Team, we have built several relevant prototypes of autonomous cars, including Stanley, the winner of the 2005 DARPA Grand Challenge, and Junior, the car that took second place in the 2007 Urban Challenge. These prototypes demonstrate that autonomous vehicles can be successful in challenging environments. Nevertheless, reliable, cost-effective perception under uncertainty remains a major obstacle to deploying robotic cars in practice. This dissertation presents selected perception technologies for autonomous driving in the context of Stanford's autonomous cars. We consider speed selection in response to terrain conditions, smooth road finding, improved visual feature optimization, and cost-effective car detection. Our work does not rely on manual engineering or even supervised machine learning. Rather, the car learns on its own, training itself without human teaching or labeling. We show that this "self-supervised" learning often meets or exceeds traditional methods. Furthermore, we feel self-supervised learning is the only approach with the potential to provide the very low failure rates necessary to improve on human driving performance.
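The self-supervised idea, in which the vehicle's own measurements serve as training labels, can be sketched for the speed-selection task. In the toy example below, a lidar-derived terrain feature is labeled by the IMU shock later measured when the car drove over that patch, and a model fit to those labels then throttles speed. The synthetic data, linear model, and thresholds are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged data: a lidar-derived terrain roughness feature,
# and the vertical IMU shock later measured when the car drove over
# that patch. The IMU reading itself is the label; no human annotates.
terrain_feature = rng.uniform(0.0, 1.0, size=500)
imu_shock = 2.0 * terrain_feature + rng.normal(0.0, 0.1, size=500)

# Self-supervised training: fit shock ~ feature by least squares.
A = np.column_stack([terrain_feature, np.ones_like(terrain_feature)])
slope, intercept = np.linalg.lstsq(A, imu_shock, rcond=None)[0]

def choose_speed(feature, v_max=15.0, shock_limit=1.0):
    """Drive at v_max on smooth terrain; slow down in proportion to
    how far the predicted shock exceeds the comfort limit."""
    predicted = slope * feature + intercept
    return v_max if predicted <= shock_limit else v_max * shock_limit / predicted

# Smooth terrain permits full speed; rough terrain forces slowing down.
print(choose_speed(0.1) > choose_speed(0.9))  # True
```

The key property is that the label source (the IMU) is a sensor the car already carries, so training data accumulates for free as the car drives.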


Autonomous Driving Perception

Author: Rui Fan
Publisher: Springer Nature
Total Pages: 391
Release: 2023-10-06
Genre: Technology & Engineering
ISBN: 981994287X

Discover the world of computer vision and deep learning for autonomous driving with this comprehensive guide. It offers an in-depth exploration of cutting-edge topics, crafted to engage university students and to spark the curiosity of researchers and professionals in the field. From fundamental principles to practical applications, the book provides a gentle introduction, expert evaluations of state-of-the-art methods, and inspiring research directions. With its broad range of topics, it is also a valuable resource for university programs offering computer vision and deep learning courses. Clear, simplified algorithm descriptions make complex concepts accessible to beginners, and carefully selected problems and examples help reinforce the material.


Robust Environmental Perception and Reliability Control for Intelligent Vehicles

Author: Huihui Pan
Publisher: Springer Nature
Total Pages: 308
Release: 2023-11-25
Genre: Technology & Engineering
ISBN: 9819977908

This book presents recent state-of-the-art algorithms for robust environmental perception and reliability control in intelligent vehicle systems. By integrating object detection, semantic segmentation, trajectory prediction, multi-object tracking, multi-sensor fusion, and reliability control in a systematic way, it aims to guarantee that intelligent vehicles can run safely in complex road traffic scenes. The book:
- Applies multi-sensor data-fusion-based neural networks to environmental perception fault-tolerance algorithms, solving the problem of perception reliability when some sensors fail by exploiting data redundancy.
- Presents a camera-based monocular approach to robust perception tasks that introduces sequential feature association, depth hint augmentation, and seven adaptive methods.
- Proposes efficient and robust semantic segmentation of traffic scenes through real-time deep dual-resolution networks and representation separation of vision transformers.
- Focuses on trajectory prediction, proposing phased and progressive prediction methods that are more consistent with human psychological characteristics and able to take both social interactions and personal intentions into account.
- Puts forward methods based on conditional random fields and multi-task segmentation learning to solve the robust multi-object tracking problem for environment perception in autonomous-vehicle scenarios.
- Presents novel reliability control strategies for intelligent vehicles that optimize dynamic tracking performance, and investigates completely unknown autonomous-vehicle tracking issues with actuator faults.
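The redundancy-based fault-tolerance idea can be sketched minimally: fuse redundant readings by inverse-variance weighting after discarding any sensor that disagrees sharply with the consensus. The function, failure model, and numbers below are illustrative assumptions, not the book's algorithms.

```python
import numpy as np

def fuse_with_fault_tolerance(readings, variances, max_dev=3.0):
    """Inverse-variance fusion of redundant sensor readings.

    A sensor whose reading deviates from the median consensus by more
    than `max_dev` of its own standard deviation is treated as failed
    and excluded, so redundancy covers for the faulty sensor.
    """
    readings = np.asarray(readings, dtype=float)
    variances = np.asarray(variances, dtype=float)
    consensus = np.median(readings)
    ok = np.abs(readings - consensus) <= max_dev * np.sqrt(variances)
    w = 1.0 / variances[ok]
    return float(np.sum(w * readings[ok]) / np.sum(w))

# Camera, lidar, and radar estimate the same obstacle distance (metres);
# the camera has failed and reports garbage.
est = fuse_with_fault_tolerance(readings=[7.0, 42.1, 41.9],
                                variances=[0.09, 0.04, 0.01])
print(round(est, 2))  # 41.94: the two healthy sensors dominate
```

In practice the neural-network fusion described in the book replaces these hand-coded rules, but the underlying principle of exploiting redundancy when a sensor fails is the same.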


Motion Planning for Autonomous Vehicles in Partially Observable Environments

Author: Taş, Ömer Şahin
Publisher: KIT Scientific Publishing
Total Pages: 222
Release: 2023-10-23
Genre:
ISBN: 3731512998

This work develops a motion planner that compensates for deficiencies in perception modules by exploiting the reaction capabilities of the vehicle. The work analyzes the uncertainties present in the problem and defines driving objectives together with constraints that ensure safety. The resulting problem is solved in real time in two distinct ways: first with nonlinear optimization, and second by framing it as a partially observable Markov decision process and approximating the solution with sampling.
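The sampling-based POMDP approximation can be illustrated with a toy speed-selection problem under occlusion: the belief over a possibly hidden pedestrian is a set of sampled particles, and each candidate speed is scored by its expected cost over those samples. All numbers, cost weights, and the hand-made particle belief are illustrative assumptions, not the planner developed in this work.

```python
def expected_cost(speed, particles, brake_decel=6.0, v_max=15.0):
    """Score a candidate speed against a sampled belief.

    Each particle is either None (no pedestrian) or the distance ahead,
    in metres, at which a currently occluded pedestrian might appear.
    A speed is penalized heavily when its stopping distance reaches a
    sampled pedestrian, and mildly for lost progress below v_max.
    """
    stop_dist = speed * speed / (2.0 * brake_decel)
    cost = 0.0
    for ped in particles:
        if ped is not None and stop_dist >= ped:
            cost += 1000.0          # potential collision
        cost += v_max - speed       # lost progress
    return cost / len(particles)

# A hand-made belief: 30% of particles contain a pedestrian somewhere
# between 6 m and 26 m ahead; the rest see a free road.
belief = [None] * 14 + [6.0, 10.0, 14.0, 18.0, 22.0, 26.0]

# Evaluate a handful of candidate speeds and pick the cheapest.
best = min([3.0, 6.0, 9.0, 12.0, 15.0],
           key=lambda v: expected_cost(v, belief))
print(best)  # 6.0: fast enough for progress, slow enough to stop in time
```

Averaging the cost over belief samples is what turns a worst-case occlusion problem into a tractable expected-cost one; richer action spaces and transition models extend the same recipe.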


Spatiotemporal Occupancy Prediction for Autonomous Driving

Author: Maneekwan Toyungyernsub
Publisher:
Total Pages: 0
Release: 2023
Genre:
ISBN:

Advancements in robotics, computer vision, machine learning, and hardware have contributed to impressive developments in autonomous vehicles. However, challenges remain before autonomous vehicles can be safely and seamlessly integrated into human environments, particularly in dense, cluttered urban settings. Autonomous vehicles must be able to understand and anticipate how their surroundings will evolve in both time and space. This capability allows them to proactively plan safe trajectories and avoid other traffic agents. A common prediction approach is agent-centric (e.g., pedestrian or vehicle trajectory prediction). These methods require detection and tracking of all agents in the environment, since trajectory prediction is performed per agent. An alternative is a map-based (e.g., occupancy grid map) prediction method, in which the entire environment is discretized into grid cells and the collective occupancy probabilities of each cell are predicted. Object detection and tracking are then generally not needed, which makes a map-based occupancy prediction approach more robust to partial object occlusions and capable of handling an arbitrary number of agents in the environment. However, a common problem with occupancy grid map prediction is the vanishing of objects from the predictions, especially at longer time horizons. In this thesis, we consider the problem of spatiotemporal environment prediction in urban environments. We merge tools from robotics, computer vision, and deep learning to develop spatiotemporal occupancy prediction frameworks that leverage environment information. In our first research work, we developed an occupancy prediction methodology that leverages environment dynamics information by separating the static and dynamic parts of the environment.
Our model learns to predict the spatiotemporal evolution of the static and dynamic parts of the environment input separately and outputs the final occupancy grid map predictions for the entire environment. In our second research work, we made the prediction framework modular by adding a learning-based static-dynamic segmentation module upstream of the occupancy prediction module. This addition addressed the previous limitation that the static and dynamic parts of the environment had to be known in advance. Lastly, we developed an environment prediction framework that leverages environment semantic information. Our proposed model consists of two sub-modules: future semantic segmentation prediction and occupancy prediction. We propose representing environment semantics in the form of semantic grid maps, similar to the occupancy grid representation, which allows a direct flow of semantic information to the occupancy prediction sub-module. Experiments on a real-world driving dataset show that our methods outperform state-of-the-art models and mitigate the vanishing of objects from the predictions at longer time horizons.
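The static-dynamic decomposition can be illustrated with a toy occupancy predictor in which the static channel simply persists and the dynamic channel is advected by an estimated cell flow before the two are recombined. The hand-coded rules below stand in for the learned prediction modules purely for illustration.

```python
import numpy as np

def predict_occupancy(static_grid, dynamic_grid, flow, steps=1):
    """Toy static-dynamic occupancy prediction.

    The static channel is assumed to persist unchanged; the dynamic
    channel is advected by a per-step integer cell flow (rows, cols).
    Hand-coded rules stand in for the learned prediction modules.
    """
    dyn = dynamic_grid.copy()
    for _ in range(steps):
        dyn = np.roll(dyn, shift=flow, axis=(0, 1))
    return np.maximum(static_grid, dyn)   # recombine the two channels

static = np.zeros((5, 5)); static[0, :] = 1.0    # a wall along the top row
dynamic = np.zeros((5, 5)); dynamic[2, 1] = 1.0  # a vehicle at row 2, col 1
pred = predict_occupancy(static, dynamic, flow=(0, 1), steps=2)
print(pred[2, 3])  # 1.0: the vehicle advanced two cells to the right
```

Keeping the static channel fixed is one intuition for why such decompositions reduce object vanishing: the model cannot smear away structure that is known not to move.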