Primary Research Goals
- Identify effective divisions, interfaces, and control rates for the locomotion control and planning hierarchy, spanning from fast, continuous passive dynamics to slow, high-level behavior planning.
- Identify low-level objectives, such as limiting peak forces, that drive animal physiology and can guide the design of physical robots and their controllers.
- Identify the roles of passive dynamics and control in legged locomotion, as they apply to animals and to robots.
- Integrate physics-first, low-level objectives into reinforcement learning to generate control policies that achieve efficient, robust locomotion.
- Identify computationally efficient, dynamically rich models for multi-step locomotion planning.
We seek to discover learning methods that produce real-world dynamic behavior grounded in first principles of legged locomotion. Our approach integrates physics-first, low-level objectives into reinforcement learning to generate control policies that achieve efficient, robust locomotion.
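One way to read "physics-first, low-level objectives" is as reward terms derived from physical quantities such as actuation effort and contact forces. The sketch below is illustrative, not the group's actual reward; the term weights and the function name `locomotion_reward` are assumptions.

```python
import numpy as np

def locomotion_reward(v_actual, v_target, joint_torques, ground_forces,
                      w_task=1.0, w_energy=0.01, w_force=0.001):
    """Illustrative reward mixing a velocity-tracking task term with
    physics-first, low-level penalties (assumed weights, not from the source)."""
    # Task term: exponential tracking of a commanded forward velocity.
    r_task = np.exp(-np.square(v_actual - v_target))
    # Low-level objective: penalize actuation effort, a proxy for energy use.
    r_energy = -np.sum(np.square(joint_torques))
    # Low-level objective: penalize the peak contact force to protect hardware.
    r_force = -np.max(np.abs(ground_forces))
    return w_task * r_task + w_energy * r_energy + w_force * r_force
```

In this framing, the task term drives behavior while the low-level terms shape how that behavior is achieved, e.g. trading a little tracking accuracy for lower peak forces.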
- Feedback Control For Cassie With Deep Reinforcement Learning
- Learning Locomotion Skills for Cassie: Iterative Design and Sim-to-Real, [YouTube]
- Sim-to-Real Learning of All Common Bipedal Gaits via Periodic Reward Composition, [YouTube]
We explore how to transfer learned policies from simulation to the real world effectively and consistently, without loss of performance or robustness. We seek to identify the factors and best practices that make sim-to-real transfer reliable.
- Learning Memory-Based Control for Human-Scale Bipedal Locomotion, [YouTube]
- Learning to Walk without Dynamics Randomization
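A common baseline in sim-to-real work (and the point of comparison in the last title above) is dynamics randomization: resampling simulator parameters each episode so the policy does not overfit to one dynamics model. A minimal sketch, with assumed parameter names and illustrative ranges:

```python
import random

def randomize_dynamics(base_params, seed=None):
    """Illustrative per-episode dynamics randomization. Parameter names and
    multiplicative ranges below are assumptions, not values from the source."""
    rng = random.Random(seed)
    ranges = {
        "mass": (0.9, 1.1),            # +/-10% link mass
        "joint_damping": (0.5, 1.5),   # wide range for poorly known damping
        "ground_friction": (0.7, 1.3),
    }
    # Scale each known parameter by a random factor; leave others unchanged.
    return {k: v * rng.uniform(*ranges.get(k, (1.0, 1.0)))
            for k, v in base_params.items()}
```

Called once at the start of each training episode, this forces the policy to be robust across a family of dynamics rather than one nominal model.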
Hierarchical Design for Learned Systems
We are investigating how best to structure a control hierarchy that mixes learned components with classical methods. We seek to guide learned policies toward effective states by structuring the action space and exploration, speeding up learning and producing better action outputs.
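One common instance of such structuring is having the policy output joint-position targets that a fixed low-level PD loop converts to torques, so exploration noise perturbs setpoints rather than raw torques. A minimal sketch (class and gain values are assumptions, not the group's implementation):

```python
import numpy as np

class PDActionSpace:
    """Illustrative structured action space: the learned policy emits
    joint-position targets; a classical PD loop produces torques."""

    def __init__(self, kp, kd):
        self.kp = np.asarray(kp, dtype=float)  # proportional gains
        self.kd = np.asarray(kd, dtype=float)  # derivative gains

    def torques(self, target_pos, joint_pos, joint_vel):
        # PD law: tau = kp * (q_target - q) - kd * qdot
        return (self.kp * (np.asarray(target_pos) - np.asarray(joint_pos))
                - self.kd * np.asarray(joint_vel))
```

Because the PD loop pulls joints toward the commanded targets, random exploration in this action space tends to visit physically reasonable configurations, which is one mechanism for the speedup described above.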