Stanford Doggo robot acrobatically traverses tough terrain

Putting their own twist on robots that amble through complicated landscapes, the Stanford Student Robotics club’s Extreme Mobility team has developed a four-legged robot that is not only capable of performing acrobatic tricks and traversing challenging terrain, but is also designed with reproducibility in mind. Anyone who wants their own version of the robot, dubbed Stanford Doggo, can consult comprehensive plans, code and a supply list that the students have made freely available online.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Nathan Kau, ’20, a mechanical engineering major and lead for Extreme Mobility. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

Whereas other similar robots can cost tens or hundreds of thousands of dollars and require customized parts, the Extreme Mobility students estimate the cost of Stanford Doggo at less than $3,000 — including manufacturing and shipping costs. Nearly all the components can be bought as-is online. The Stanford students said they hope the accessibility of these resources inspires a community of Stanford Doggo makers and researchers who develop innovative and meaningful spinoffs from their work.

Stanford Doggo can already walk, trot, dance, hop, jump, and perform the occasional backflip. The students are working on a larger version of their creation — which is currently about the size of a beagle — but they will take a short break to present Stanford Doggo at the International Conference on Robotics and Automation (ICRA) on May 21 in Montreal.


A hop, a jump and a backflip

In order to make Stanford Doggo replicable, the students built it from scratch. This meant spending a lot of time researching easily attainable supplies and testing each part as they made it, without relying on simulations.

“It’s been about two years since we first had the idea to make a quadruped. We’ve definitely made several prototypes before we actually started working on this iteration of the dog,” said Natalie Ferrante, Class of 2019, a mechanical engineering co-terminal student and Extreme Mobility Team member. “It was very exciting the first time we got him to walk.”

Stanford Doggo’s first steps were admittedly toddling, but now the robot can maintain a consistent gait and desired trajectory, even as it encounters different terrains. It does this with the help of motors that sense external forces on the robot and determine how much force and torque each leg should apply in response. These calculations are rerun 8,000 times a second and are essential to the robot’s signature dance: a bouncy boogie that hides the fact that it has no springs.

Instead, the motors act like a system of virtual springs, smoothly but perkily rebounding the robot into proper form whenever they sense it’s out of position.
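
For readers curious how a springless robot can bounce, a virtual spring is essentially a high-rate proportional-derivative (PD) loop: command a restoring force proportional to position error, minus a damping term. Below is a minimal, self-contained sketch of that idea; the stiffness, damping and mass values are illustrative assumptions, not gains from the Stanford Doggo code.

```python
# Minimal sketch of a "virtual spring": a fast PD loop makes a
# springless leg behave like a spring-damper. All constants are
# illustrative assumptions, not Stanford Doggo's actual gains.

DT = 1.0 / 8000.0   # update period matching the article's 8,000 Hz rate
KP = 800.0          # virtual stiffness, N/m (assumed)
KD = 40.0           # virtual damping, N*s/m (assumed)
MASS = 2.0          # effective mass, kg (assumed)

def virtual_spring_force(x_target, x, v):
    """Force a virtual spring-damper would exert toward x_target."""
    return KP * (x_target - x) - KD * v

# Knock the leg 5 cm out of position and watch the controller rebound it.
x, v = 0.05, 0.0
for _ in range(4000):                    # 0.5 s of simulated time
    f = virtual_spring_force(0.0, x, v)  # force the motors must produce
    v += (f / MASS) * DT                 # integrate acceleration
    x += v * DT
print(f"displacement after 0.5 s: {x * 1000:.2f} mm")  # back near zero
```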

Among the skills and tricks the team added to the robot’s repertoire, the students were exceptionally surprised at its jumping prowess. Running Stanford Doggo through its paces one (very) early morning in the lab, the team realized it was effortlessly popping up 2 feet in the air. By pushing the limits of the robot’s software, Stanford Doggo was able to jump 3, then 3½ feet off the ground.

“This was when we realized that the robot was, in some respects, higher performing than other quadruped robots used in research, even though it was really low cost,” recalled Kau.

Since then, the students have taught Stanford Doggo to do a backflip – but always on padding to allow for rapid trial and error experimentation.

Stanford students have developed Doggo, a relatively low-cost four-legged robot that can trot, jump and flip. (Image credit: Kurt Hickman)

What will Stanford Doggo do next?

If these students have their way, the future of Stanford Doggo is in the hands of the masses.

“We’re hoping to provide a baseline system that anyone could build,” said Patrick Slade, graduate student in aeronautics and astronautics and mentor for Extreme Mobility. “Say, for example, you wanted to work on search and rescue; you could outfit it with sensors and write code on top of ours that would let it climb rock piles or excavate through caves. Or maybe it’s picking up stuff with an arm or carrying a package.”

That’s not to say they aren’t continuing their own work. Extreme Mobility is collaborating with the Robotic Exploration Lab of Zachary Manchester, assistant professor of aeronautics and astronautics at Stanford, to test new control systems on a second Stanford Doggo. The team has also finished constructing a robot twice the size of Stanford Doggo that can carry about 6 kilograms of equipment. Its name is Stanford Woofer.

Note: This article is republished from the Stanford University News Service.

Neural network helps autonomous car learn to handle the unknown



Shelley, Stanford’s autonomous Audi TTS, performs at Thunderhill Raceway Park. (Credit: Kurt Hickman)

Researchers at Stanford University have developed a new way of controlling autonomous cars that integrates prior driving experiences – a system that will help the cars perform more safely in extreme and unknown circumstances. Tested at the limits of friction on a racetrack using Niki, Stanford’s autonomous Volkswagen GTI, and Shelley, Stanford’s autonomous Audi TTS, the system performed about as well as an existing autonomous control system and an experienced racecar driver.

“Our work is motivated by safety, and we want autonomous vehicles to work in many scenarios, from normal driving on high-friction asphalt to fast, low-friction driving in ice and snow,” said Nathan Spielberg, a graduate student in mechanical engineering at Stanford and lead author of the paper about this research, published March 27 in Science Robotics. “We want our algorithms to be as good as the best skilled drivers—and, hopefully, better.”

While current autonomous cars might rely on in-the-moment evaluations of their environment, the control system these researchers designed incorporates data from recent maneuvers and past driving experiences – including trips Niki took around an icy test track near the Arctic Circle. Its ability to learn from the past could prove particularly powerful, given the abundance of autonomous car data researchers are producing in the process of developing these vehicles.

Physics and learning with a neural network

Control systems for autonomous cars need access to information about the available road-tire friction. This information dictates the limits of how hard the car can brake, accelerate and steer in order to stay on the road in critical emergency scenarios. If engineers want to safely push an autonomous car to its limits, such as having it plan an emergency maneuver on ice, they have to provide it with details, like the road-tire friction, in advance. This is difficult in the real world where friction is variable and often is difficult to predict.
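
To make that concrete, the friction coefficient μ caps the total acceleration the tires can generate at roughly μ·g, and braking and cornering share that budget (the classic “friction circle”). The sketch below runs the numbers for illustrative μ values; it is a textbook approximation, not a calculation from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def cornering_budget(mu, braking_accel):
    """Lateral acceleration still available inside the friction circle.

    Total tire force is capped near mu * m * g, so achievable
    accelerations satisfy a_brake**2 + a_lat**2 <= (mu * g)**2.
    """
    remaining = (mu * G) ** 2 - braking_accel ** 2
    return math.sqrt(remaining) if remaining > 0 else 0.0

# Braking at 3 m/s^2 on two surfaces (mu values are rough, assumed figures).
for surface, mu in [("dry asphalt", 0.9), ("ice", 0.15)]:
    print(f"{surface}: {cornering_budget(mu, 3.0):.2f} m/s^2 left for cornering")
# On ice, mu * g is only about 1.5 m/s^2, so a 3 m/s^2 brake request
# alone already exceeds the available grip.
```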

To develop a more flexible, responsive control system, the researchers built a neural network that integrates data from past driving experiences at Thunderhill Raceway in Willows, California, and a winter test facility with foundational knowledge provided by 200,000 physics-based trajectories.
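
The paper’s exact architecture is not reproduced here, but the general recipe (a small feedforward network trained to predict vehicle response from a mix of physics-generated and logged trajectories) might look something like the hypothetical PyTorch sketch below. Layer sizes, input features and training details are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical dynamics model in the spirit of the paper: map a short
# history of vehicle states and commands to predicted accelerations.
# The sizes and features here are assumptions, not the authors' design.

model = nn.Sequential(
    nn.Linear(8, 64),   # e.g. speed, yaw rate, steering over recent steps
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),   # predicted lateral and yaw acceleration
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(states, targets):
    """One gradient step on (state history, measured acceleration) pairs.

    In the paper's setup, the training data would mix the 200,000
    physics-based trajectories with logged test-track driving; here
    both are simply rows in `states` and `targets`.
    """
    optimizer.zero_grad()
    loss = loss_fn(model(states), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random stand-in data.
print(f"loss: {train_step(torch.randn(32, 8), torch.randn(32, 2)):.3f}")
```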

The video above shows the neural network controller implemented on Niki, Stanford’s automated Volkswagen GTI, tested at the limits of handling (the ability of a vehicle to maneuver a track or road without skidding out of control) at Thunderhill Raceway.

“With the techniques available today, you often have to choose between data-driven methods and approaches grounded in fundamental physics,” said J. Christian Gerdes, professor of mechanical engineering and senior author of the paper. “We think the path forward is to blend these approaches in order to harness their individual strengths. Physics can provide insight into structuring and validating neural network models that, in turn, can leverage massive amounts of data.”

The group ran comparison tests for their new system at Thunderhill Raceway. First, Shelley sped around the track under the control of the physics-based autonomous system, pre-loaded with set information about the course and conditions. When compared on the same course during 10 consecutive trials, Shelley and a skilled amateur driver generated comparable lap times. Then, the researchers loaded Niki with their new neural network system. The car performed similarly running both the learned and physics-based systems, even though the neural network lacked explicit information about road friction.

In simulated tests, the neural network system outperformed the physics-based system in both high-friction and low-friction scenarios. It did particularly well in scenarios that mixed those two conditions.

Simple feedforward-feedback control structure used for path tracking on an automated vehicle. (Credit: Stanford University)
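
That feedforward-feedback split is a standard path-tracking structure: a feedforward steering term computed from the upcoming path curvature, plus a feedback term that corrects measured tracking error. A minimal sketch follows; the kinematic feedforward, gains and sign conventions are generic assumptions, not the controller from the paper.

```python
# Generic feedforward-feedback steering law for path tracking.
# Constants and sign conventions are illustrative assumptions.

WHEELBASE = 2.6   # m, typical compact-car value (assumed)
K_LAT = 0.3       # rad of steering per meter of lateral error (assumed)
K_HEAD = 0.8      # rad of steering per rad of heading error (assumed)

def steering_command(path_curvature, lateral_error, heading_error):
    """Steering angle (rad) = curvature feedforward + error feedback."""
    feedforward = WHEELBASE * path_curvature        # kinematic steering
    feedback = -K_LAT * lateral_error - K_HEAD * heading_error
    return feedforward + feedback

# Example: a 50 m radius left curve, car 0.2 m left of the path and
# pointed 0.05 rad too far left (positive = left in this convention).
print(f"{steering_command(1 / 50.0, 0.2, 0.05):.3f} rad")
```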

An abundance of data

The results were encouraging, but the researchers stress that their neural network system does not perform well in conditions outside the ones it has experienced. They say as autonomous cars generate additional data to train their network, the cars should be able to handle a wider range of conditions.

“With so many self-driving cars on the roads and in development, there is an abundance of data being generated from all kinds of driving scenarios,” Spielberg said. “We wanted to build a neural network because there should be some way to make use of that data. If we can develop vehicles that have seen thousands of times more interactions than we have, we can hopefully make them safer.”

Editor’s Note: This article was republished from Stanford University.

The post Neural network helps autonomous car learn to handle the unknown appeared first on The Robot Report.