Learning Mobile Manipulation Actions from Human Demonstrations: An Approach to Learning and Augmenting Action Models and Their Integration into Task Representations

Author: Tim Welschehold
Publisher:
Total Pages:
Release: 2020
Genre:
ISBN:


Abstract: While incredible advancements in robotics have been achieved over the last decade, direct physical interaction with an initially unknown and dynamic environment is still a challenging problem. In order to use robots as service assistants that take over household chores in the user's home environment, they must be able to perform goal-directed manipulation tasks autonomously and, further, learn these tasks intuitively from their owners. Consider, for instance, the task of setting a breakfast table: although it is a relatively simple task for a human being, it poses serious challenges to a robot. The robot must physically handle the user's customized household environment and the objects therein, i.e., it must determine how the items needed to set the table can be grasped and moved, how the kitchen cabinets can be opened, and so on. Additionally, the personal preferences of the user on how the breakfast table should be arranged must be respected. Due to the diverse characteristics of custom objects and individual human needs, even a standard task like setting a breakfast table is impossible to pre-program before knowing the place of use and the conditions there. Therefore, the most promising way to employ robots as domestic help is to enable them to learn the tasks they should perform directly from their owners, without requiring the owner to possess any special knowledge of robotics or programming skills. Throughout this thesis we present various contributions addressing these challenges. Although learning from demonstration is a well-established approach to teaching robots without explicit programming, most approaches in the literature for learning manipulation actions use kinesthetic training, as these actions require thorough knowledge of the interactions between the robot and the object, which can be learned directly through kinesthetic teaching since no abstraction is needed. In addition, most current imitation learning approaches do not consider mobile platforms. In this thesis we present a novel approach to learning joint robot base and end-effector action models from observations of demonstrations carried out by a human teacher. To achieve this, we adapt trajectory data obtained from RGB-D recordings of the human teacher performing the action to the capabilities of the robot. We formulate a graph optimization problem that links the observed human trajectories with the robot's grasping capabilities and with kinematic constraints between co-occurring base and gripper poses, allowing us to generate robot-suitable trajectories. In a next step, we do not just learn individual manipulation actions but combine several actions into one task. Challenges arise from handling ambiguous goals and generalizing the task to new settings. We present an approach that learns both representations together from the same teacher demonstrations: one for the individual mobile manipulation actions as described above, and one for the overall task intent. We leverage a framework based on Monte Carlo tree search to compute sequences of feasible actions that imitate the teacher's intention in new settings without explicitly specifying a task goal. In this way, we can reproduce complex tasks while ensuring that all composing actions are executable in the given setting. The mobile manipulation models mentioned above are encoded as dynamical systems to facilitate interaction with objects in world coordinates.
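To make the graph-optimization step above concrete, the following minimal sketch (a toy under stated assumptions, not the thesis implementation) jointly adjusts 2D gripper and base waypoints so that they stay close to the observed human trajectories while a soft reachability term keeps each pair of co-occurring poses kinematically compatible. The waypoints, cost weights, and reach radius are all illustrative.

import numpy as np
from scipy.optimize import minimize

# Observed teacher trajectories (2D positions for brevity; illustrative values).
human_gripper = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.3]])
human_base = np.array([[-0.6, 0.0], [-0.1, 0.0], [0.4, 0.2]])
MAX_REACH = 0.9  # assumed arm reach in meters

def cost(x):
    n = len(human_gripper)
    grip = x[:2 * n].reshape(n, 2)
    base = x[2 * n:].reshape(n, 2)
    # Stay close to the demonstrated gripper and base trajectories.
    c = np.sum((grip - human_gripper) ** 2) + np.sum((base - human_base) ** 2)
    # Soft kinematic constraint: penalize gripper poses out of the base's reach.
    reach = np.linalg.norm(grip - base, axis=1)
    c += 10.0 * np.sum(np.maximum(reach - MAX_REACH, 0.0) ** 2)
    # Smoothness of consecutive base poses.
    c += np.sum(np.diff(base, axis=0) ** 2)
    return c

x0 = np.concatenate([human_gripper.ravel(), human_base.ravel()])
result = minimize(cost, x0, method="L-BFGS-B")
adapted_base = result.x[2 * len(human_gripper):].reshape(-1, 2)  # robot-suitable path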
Encoding the models in world coordinates, however, poses the challenge of translating the robot's kinematic constraints into the task space and including them in the action models. In this thesis we propose to couple robot base and end-effector motions generated by arbitrary dynamical systems by modulating the base velocity while respecting the robot's kinematic design. To this end, we learn a closed-form approximation of the inverse reachability and implement the coupling as an obstacle avoidance problem. Furthermore, we address the challenge of imitating manipulation actions whose execution depends on additional non-geometric quantities, e.g., the contact forces when handing over an object or the measured liquid height when pouring water into a cup. We suggest an approach that includes this additional information, in the form of measured features, directly in the action models. These features are recorded in the demonstrations alongside the geometric course of the manipulation action, and their correlation is captured in a Gaussian mixture model that parametrizes the dynamical system used. This enables us to couple the motion's geometric trajectory to the perceived features in the scene during action imitation. All of the above contributions were evaluated extensively in real-world robot experiments on a PR2 system and a KUKA iiwa robot arm.
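The coupling of geometric motion to perceived features can be pictured with Gaussian mixture regression. The sketch below (toy data and dimensions assumed; not the thesis code) fits a GMM over position, a sensed feature such as liquid height, and velocity, then conditions on the current position and feature to retrieve the velocity that drives the dynamical system.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Toy demonstrations: 1-D position, 1-D sensed feature, and recorded velocity.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, 500)
feat = 0.5 * pos + 0.05 * rng.standard_normal(500)
vel = -2.0 * (pos - 1.0) * (1.0 - feat)
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(np.column_stack([pos, feat, vel]))

def gmr_velocity(x, n_in=2):
    # Condition the joint GMM on the first n_in dims (position, feature).
    x = np.asarray(x, dtype=float)
    h = np.array([w * multivariate_normal.pdf(x, m[:n_in], c[:n_in, :n_in])
                  for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
    h /= h.sum()  # component responsibilities for the query point
    y = 0.0
    for hk, m, c in zip(h, gmm.means_, gmm.covariances_):
        # Conditional mean of the velocity dims given the query, per component.
        y += hk * (m[n_in:] + c[n_in:, :n_in] @
                   np.linalg.solve(c[:n_in, :n_in], x - m[:n_in]))
    return y

print(gmr_velocity([0.3, 0.2]))  # velocity given current position and feature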


Developing a Mobile Manipulation System to Handle Unknown and Unstructured Objects

Author: Abdulrahman Al-Shanoon
Publisher:
Total Pages: 0
Release: 2021
Genre:
ISBN:


The exceptional human ability to interact with unknown objects based on minimal prior experience is a lasting inspiration to the field of robotic manipulation. The recent revolution in industrial and service robots demands highly autonomous, intelligent mobile manipulators. The goal of this thesis is to develop an autonomous mobile robotic manipulation system that can handle unknown and unstructured objects with minimal training and human involvement. First, an end-to-end vision-based mobile manipulation architecture trained largely on synthetic datasets is proposed. The system includes 1) an effective training strategy for a perception network that estimates object poses, and 2) the integration of the resulting estimates as sensing feedback in a visual servoing system to achieve autonomous mobile manipulation. Experimental findings from simulations and real-world settings showed the effectiveness of computer-generated datasets, which generalize to the physical mobile manipulation task. The model of the presented robot is experimentally verified and discussed. Second, the challenging robotic manipulation scenario of unknown adjacent objects is addressed using a scalable self-supervised system that can learn grasping control strategies for unknown objects based on limited knowledge and simple sample objects. The developed learning scheme benefits both generalization and transferability without requiring any additional training or prior object awareness. Finally, an end-to-end self-learning framework is proposed that learns manipulation policies for challenging scenarios from minimal training time and raw experience. The proposed model learns from scratch, mapping visual observations to sequential decision-making and manipulation actions, and generalizes to unknown scenarios. The agent composes a sequence of manipulations that purposely lead to successful grasps. Experimental results demonstrated the effectiveness of learning across manipulation actions, with the grasping success rate increasing dramatically. The proposed system was successfully tested and validated in simulations and real-world settings.
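The pose-estimation-as-feedback loop can be sketched as simple proportional servoing. Here estimate_object_pose is a hypothetical stand-in for the trained perception network, and the gain, time step, and target values are illustrative assumptions, not the thesis implementation.

import numpy as np

LAMBDA = 0.5  # assumed proportional gain

def estimate_object_pose(image):
    # Hypothetical stand-in for the perception network trained on synthetic data.
    return np.array([0.6, 0.1, 0.3])  # estimated object position, robot frame

def servo_step(ee_position, image, dt=0.05):
    error = estimate_object_pose(image) - ee_position  # visual feedback error
    return ee_position + LAMBDA * error * dt           # proportional velocity step

ee = np.zeros(3)
for _ in range(500):  # the end-effector converges toward the estimated pose
    ee = servo_step(ee, image=None)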


Robot Learning from Human Demonstration

Author: Sonia Dechter
Publisher: Springer Nature
Total Pages: 109
Release: 2022-06-01
Genre: Computers
ISBN: 3031015703


Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 gives a brief survey of the psychology literature on human social learning that is relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state-of-the-art approaches in prior work. First is the choice of input: how the human teacher interacts with the robot to provide demonstrations. Next is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions; we devote a chapter to each of these. Chapter 7 is devoted to interactive and active learning approaches that allow the robot to refine an existing task model. Finally, Chapter 8 provides best practices for the evaluation of LfD systems, with a focus on how to approach experiments with human subjects in this domain.


Learning and Execution of Object Manipulation Tasks on Humanoid Robots

Author: Mirko Waechter
Publisher: KIT Scientific Publishing
Total Pages: 258
Release: 2018-03-21
Genre: Electronic computers. Computer science
ISBN: 3731507498


Equipping robots with complex capabilities still requires a great amount of effort. In this work, a novel approach is proposed to understand, to represent and to execute object manipulation tasks learned from observation by combining methods of data analysis, graphical modeling and artificial intelligence. Employing this approach enables robots to reason about how to solve tasks in dynamic environments and to adapt to unseen situations.


Robot Programming by Demonstration

Author: Sylvain Calinon
Publisher: EPFL Press
Total Pages: 248
Release: 2009-08-24
Genre: Computers
ISBN: 9781439808672


Recent advances in RbD have identified a number of key issues for ensuring a generic approach to the transfer of skills across various agents and contexts. This book focuses on the two generic questions of what to imitate and how to imitate and proposes active teaching methods.


Understanding and Learning Robotic Manipulation Skills from Humans

Author: Elena Galbally Herrero
Publisher:
Total Pages: 0
Release: 2022
Genre: Machine learning
ISBN:


Humans are constantly learning new skills and improving upon their existing abilities. In particular, when it comes to manipulating objects, humans are extremely effective at generalizing to new scenarios and using physical compliance to their advantage. Compliance is key to generating robust behaviors because it reduces the need to rely on precise trajectories. Programming robots through predefined trajectories has been highly successful for performing tasks in structured environments, such as assembly lines. However, such an approach is not viable for real-time operation in real-world scenarios. Inspired by humans, we propose to program robots at a higher level of abstraction by using primitives that leverage contact information and compliant strategies. Compliance increases robustness to uncertainty in the environment, and primitives provide atomic actions that can be reused to avoid coding new tasks from scratch. We have developed a framework that allows us to: (i) collect and segment human data from multiple contact-rich tasks through direct or haptic demonstrations, (ii) analyze this data and extract the human's compliant strategy, and (iii) encode the strategy into robot primitives using task-level controllers. During autonomous task execution, haptic interfaces enable real-time human intervention and additional data collection for recovery from failures. At the core of this framework is the notion of a compliant frame: an origin and three directions in space along and about which we control motion and compliance. The compliant frame is attached to the object being manipulated and, together with the desired task parameters, defines a primitive. Task parameters include desired forces, moments, positions, and orientations. This task specification provides a physically meaningful, low-dimensional, and robot-independent representation. This thesis presents a novel framework for learning manipulation skills from demonstration data. Leveraging compliant frames enables us to understand human actions and extract strategies that generalize across objects and robots. The framework was extensively validated through simulation and hardware experiments, including five real-world construction tasks.
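A minimal sketch of how such a compliant-frame task specification might look in code follows; the field names and example values are hypothetical, chosen only to mirror the description above.

import numpy as np
from dataclasses import dataclass

@dataclass
class CompliantFrame:
    origin: np.ndarray  # frame origin, attached to the manipulated object
    axes: np.ndarray    # three directions along/about which motion and compliance are controlled

@dataclass
class Primitive:
    frame: CompliantFrame
    control_mode: tuple            # per axis: "position" or "force"
    desired_position: np.ndarray   # used on position-controlled axes
    desired_force: np.ndarray      # used on force-controlled axes (N)

# Example: slide along the object's x-axis while pressing down with 5 N.
slide = Primitive(
    frame=CompliantFrame(origin=np.zeros(3), axes=np.eye(3)),
    control_mode=("position", "position", "force"),
    desired_position=np.array([0.2, 0.0, 0.0]),
    desired_force=np.array([0.0, 0.0, -5.0]),
)

Because the specification lives in the object-attached frame and contains only desired forces, moments, and poses, it stays low-dimensional and robot-independent, as the abstract notes.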


Interactive Task Learning

Author: Kevin A. Gluck
Publisher: MIT Press
Total Pages: 355
Release: 2019-08-16
Genre: Computers
ISBN: 0262349434


Experts from a range of disciplines explore how humans and artificial agents can quickly learn completely new tasks through natural interactions with each other. Humans are not limited to a fixed set of innate or preprogrammed tasks. We learn quickly through language and other forms of natural interaction, and we improve our performance and teach others what we have learned. Understanding the mechanisms that underlie the acquisition of new tasks through natural interaction is an ongoing challenge. Advances in artificial intelligence, cognitive science, and robotics are leading us to future systems with human-like capabilities. A huge gap exists, however, between the highly specialized niche capabilities of current machine learning systems and the generality, flexibility, and in situ robustness of human instruction and learning. Drawing on expertise from multiple disciplines, this Strüngmann Forum Report explores how humans and artificial agents can quickly learn completely new tasks through natural interactions with each other. The contributors consider functional knowledge requirements, the ontology of interactive task learning, and the representation of task knowledge at multiple levels of abstraction. They explore natural forms of interactions among humans as well as the use of interaction to teach robots and software agents new tasks in complex, dynamic environments. They discuss research challenges and opportunities, including ethical considerations, and make proposals to further understanding of interactive task learning and create new capabilities in assistive robotics, healthcare, education, training, and gaming. Contributors Tony Belpaeme, Katrien Beuls, Maya Cakmak, Joyce Y. Chai, Franklin Chang, Ropafadzo Denga, Marc Destefano, Mark d'Inverno, Kenneth D. Forbus, Simon Garrod, Kevin A. Gluck, Wayne D. Gray, James Kirk, Kenneth R. Koedinger, Parisa Kordjamshidi, John E. Laird, Christian Lebiere, Stephen C. Levinson, Elena Lieven, John K. Lindstedt, Aaron Mininger, Tom Mitchell, Shiwali Mohan, Ana Paiva, Katerina Pastra, Peter Pirolli, Roussell Rahman, Charles Rich, Katharina J. Rohlfing, Paul S. Rosenbloom, Nele Russwinkel, Dario D. Salvucci, Matthew-Donald D. Sangster, Matthias Scheutz, Julie A. Shah, Candace L. Sidner, Catherine Sibert, Michael Spranger, Luc Steels, Suzanne Stevenson, Terrence C. Stewart, Arthur Still, Andrea Stocco, Niels Taatgen, Andrea L. Thomaz, J. Gregory Trafton, Han L. J. van der Maas, Paul Van Eecke, Kurt VanLehn, Anna-Lisa Vollmer, Janet Wiles, Robert E. Wray III, Matthew Yee-King


IROS

Author:
Publisher:
Total Pages: 654
Release: 2001
Genre: Robotics
ISBN:



Learning Mobile Manipulation

Author: David Joseph Watkins
Publisher:
Total Pages: 0
Release: 2022
Genre:
ISBN:


Providing mobile robots with the ability to manipulate objects has, despite decades of research, remained a challenging problem. The problem is approachable in constrained environments where there is ample prior knowledge of the environment layout and of the manipulable objects. The challenge lies in building systems that scale beyond specific situational instances and gracefully operate in novel conditions. In the past, researchers used heuristic and simple rule-based strategies to accomplish tasks such as scene segmentation or reasoning about occlusion. These heuristic strategies work in constrained environments where a roboticist can make simplifying assumptions about everything from the geometries of the objects to be interacted with to the level of clutter, camera position, lighting, and a myriad of other relevant variables. The work in this thesis demonstrates how to build a system for robotic mobile manipulation that is robust to changes in these variables. This robustness is enabled by recent simultaneous advances in the fields of big data, deep learning, and simulation.


Constructing Mobile Manipulation Behaviors Using Expert Interfaces and Autonomous Robot Learning

Author: Hai Dai Nguyen
Publisher:
Total Pages:
Release: 2013
Genre: End-user computing
ISBN:


With current state-of-the-art approaches, the development of a single mobile manipulation capability can be a labor-intensive process, which presents an impediment to the creation of general-purpose household robots. At the same time, we expect that involving a larger community of non-roboticists can accelerate the creation of novel behaviors. We introduce a software authoring environment called ROS Commander (ROSCo) that allows end-users to create, refine, and reuse robot behaviors with complexity similar to those currently created by roboticists. Akin to Photoshop, which provides end-users with interfaces to advanced computer vision algorithms, our environment provides interfaces to mobile manipulation algorithmic building blocks that can be combined and configured to suit the demands of new tasks and their variations. As our system can be more demanding of users than alternatives such as kinesthetic guidance or learning from demonstration, we performed a user study with 11 able-bodied participants and one person with quadriplegia to determine whether computer-literate non-roboticists can learn to use our tool. In our study, all participants were able to successfully construct functional behaviors after being trained. Furthermore, participants produced behaviors that demonstrated a variety of creative manipulation strategies, showing the power of enabling end-users to author robot behaviors. Additionally, we show how autonomous robot learning, where the robot captures its own training data, can complement human authoring of behaviors by freeing users from the repetitive task of capturing data for learning. By taking advantage of the robot's embodiment, our method creates classifiers that predict, from visual appearance, the 3D locations on home mechanisms where user-constructed behaviors will succeed. With active learning, we show that such classifiers can be learned from a small number of examples. We also show that this learning system works with behaviors constructed by non-roboticists in our user study. As far as we know, this is the first instance of perception learning with behaviors not hand-crafted by roboticists.
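The active-learning idea can be sketched as an uncertainty-sampling loop; the logistic-regression classifier, the synthetic success model standing in for real robot trials, and all constants are illustrative assumptions, not the system described above.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(200, 3))  # candidate 3D locations

def try_behavior(x):
    # Stand-in for the robot executing a behavior and labeling its own outcome.
    return int(x[0] + 0.1 * rng.standard_normal() > 0.5)

# Seed with the two most extreme candidates so both outcome classes appear.
labeled = [int(candidates[:, 0].argmin()), int(candidates[:, 0].argmax())]
labels = [try_behavior(candidates[i]) for i in labeled]

for _ in range(10):  # active-learning rounds: query the most uncertain point
    clf = LogisticRegression().fit(candidates[labeled], labels)
    probs = clf.predict_proba(candidates)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    uncertainty[labeled] = np.inf  # never re-query an already-tried location
    pick = int(np.argmin(uncertainty))
    labeled.append(pick)
    labels.append(try_behavior(candidates[pick]))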