U.S. Robotics Roadmap calls for white papers for revision

The U.S. National Robotics Roadmap was first created 10 years ago. Since then, government agencies, universities, and companies have used it as a reference for where robotics is going. The first roadmap was published in 2009 and then revised in 2013 and 2016. The objective is to publish the fourth version of the roadmap by summer 2020.

The team developing the U.S. National Robotics Roadmap has put out a call to engage about 150 to 200 people from academia and industry to ensure that it is representative of the robotics community’s view of the future. The roadmap will cover manufacturing, service, medical, first-responder, and space robotics.

The revised roadmap will also include considerations related to ethics and workforce. It will cover emerging applications, the key challenges to progress, and what research and development is needed.

Join community workshops

Three one-and-a-half-day workshops will be organized for community input to the roadmap. The workshops will take place as follows:

  • Sept. 11-12 in Chicago (organized by Nancy Amato, co-director of the Parasol Lab at Texas A&M University and head of the Department of Computer Science at the University of Illinois at Urbana-Champaign)
  • Oct. 17-18 in Los Angeles (organized by Maja Mataric, Chan Soon-Shiong distinguished professor of computer science, neuroscience, and pediatrics at the University of Southern California)
  • Nov. 15-16 in Lowell, Mass. (organized by Holly Yanco, director of the NERVE Center at the University of Massachusetts Lowell)

Participation in these workshops will be by invitation only. To participate, please submit a white paper/position statement of a maximum length of 1.5 pages. What are key use cases for robotics in a five-to-10-year perspective, what are key limitations, and what R&D is needed in that time frame? The white paper can address all three aspects or focus on one of them. The white paper must include the following information:

  • Name, affiliation, and e-mail address
  • A position statement (1.5 pages max)

Please submit the white paper as regular text or as a PDF file. Statements that are too long will be ignored. Position papers that only focus on current research are not appropriate. A white paper should present a future vision and not merely discuss state of the art.

White papers should be submitted by end of the day Aug. 15, 2019, to roadmapping@robotics-vo.org. Late submissions may not be considered. We will evaluate submitted white papers by Aug. 18 and select people for the workshops by Aug. 19.

Roadmap revision timeline

The workshop reports will be used as the basis for a synthesis of a new roadmap. The nominal timeline is:

  • August 2019: Call for white papers
  • September – November 2019: Workshops
  • December 2019: Workshop reports finalized
  • January 2020: Synthesis meeting at UC San Diego
  • February 2020: Publish draft roadmap for community feedback
  • April 2020: Revision of roadmap based on community feedback
  • May 2020: Finalize roadmap with graphics design
  • July 2020: Publish roadmap

If you have any questions about the process, the scope, etc., please send e-mail to Henrik I Christensen at hichristensen@eng.ucsd.edu.

Henrik I Christensen spoke at the Robotics Summit & Expo in Boston.

Editor’s note: Christensen, Qualcomm Chancellor’s Chair of Robot Systems at the University of California San Diego and co-founder of Robust AI, delivered a keynote address at last month’s Robotics Summit & Expo, produced by The Robot Report.

The post U.S. Robotics Roadmap calls for white papers for revision appeared first on The Robot Report.

Roach-inspired robot shares insect’s speed, toughness

If the sight of a skittering bug makes you squirm, you may want to look away — a new insect-sized robot created by researchers at the University of California, Berkeley, can scurry across the floor at nearly the speed of a darting cockroach. And it’s nearly as hardy as a roach is. Try to squash this robot under your foot, and more than likely, it will just keep going.

“Most of the robots at this particular small scale are very fragile. If you step on them, you pretty much destroy the robot,” said Liwei Lin, a professor of mechanical engineering at UC Berkeley and senior author of a new study that describes the robot. “We found that if we put weight on our robot, it still more or less functions.”

Small-scale robots like these could be advantageous in search-and-rescue missions, squeezing and squishing into places where dogs or humans can’t fit, or where it may be too dangerous for them to go, said Yichuan Wu, first author of the paper, who completed the work as a graduate student in mechanical engineering at UC Berkeley through the Tsinghua-Berkeley Shenzhen Institute partnership.

“For example, if an earthquake happens, it’s very hard for the big machines, or the big dogs, to find life underneath debris, so that’s why we need a small-sized robot that is agile and robust,” said Wu, who is now an assistant professor at the University of Electronic Science and Technology of China.

The study appears this week in the journal Science Robotics.

PVDF provides roach-like characteristics

The robot, which is about the size of a large postage stamp, is made of a thin sheet of a piezoelectric material called polyvinylidene fluoride, or PVDF. Piezoelectric materials are unique in that applying an electric voltage causes them to expand or contract.

UC Berkeley roach robot

The robot is built of a layered material that bends and straightens when AC voltage is applied, causing it to spring forward in a “leapfrogging” motion. Credit: UC Berkeley video and photo by Stephen McNally

The researchers coated the PVDF in a layer of an elastic polymer, which causes the entire sheet to bend, instead of to expand or contract. They then added a front leg so that, as the material bends and straightens under an electric field, the oscillations propel the device forward in a “leapfrogging” motion.

The resulting robot may be simple to look at, but it has some remarkable abilities. It can sail along the ground at a speed of 20 body lengths per second, a rate comparable to that of a roach and reported to be the fastest pace among insect-scale robots. It can zip through tubes, climb small slopes, and carry small loads, such as a peanut.

Perhaps most impressively, the robot, which weighs less than one-tenth of a gram, can withstand a weight of around 60 kg (132 lb.) — about the weight of an average human — which is approximately 1 million times the weight of the robot.

“People may have experienced that, if you step on the cockroach, you may have to grind it up a little bit, otherwise the cockroach may still survive and run away,” Lin said. “Somebody stepping on our robot is applying an extraordinarily large weight, but [the robot] still works, it still functions. So, in that particular sense, it’s very similar to a cockroach.”

The robot is currently “tethered” to a thin wire that carries an electric voltage that drives the oscillations. The team is experimenting with adding a battery so the roach robot can roam independently. They are also working to add gas sensors and are improving the design of the robot so it can be steered around obstacles.

Co-authors of the paper include Justin K. Yim, Zhichun Shao, Mingjing Qi, Junwen Zhong, Zihao Luo, Ronald S. Fearing and Robert J. Full of UC Berkeley, Xiaojun Yan of Beihang University and Jiaming Liang, Min Zhang and Xiaohao Wang of Tsinghua University.

This work is supported in part by the Berkeley Sensor and Actuator Center, an Industry-University Cooperation Research Center.

Editor’s note: This article republished from the University of California, Berkeley.

Velodyne Lidar acquires Mapper.ai for advanced driver assistance systems

SAN JOSE, Calif. — Velodyne Lidar Inc. today announced that it has acquired Mapper.ai’s mapping and localization software, as well as its intellectual property assets. Velodyne said that Mapper’s technology will enable it to accelerate development of the Vella software that establishes its directional view Velarray lidar sensor.

The Velarray is the first solid-state Velodyne lidar sensor that is embeddable and fits behind a windshield, said Velodyne, which described it as “an integral component for superior, more effective advanced driver assistance systems” (ADAS).

The company provides lidar sensors for autonomous vehicles and driver assistance. David Hall, Velodyne's founder and CEO, invented real-time surround-view lidar systems in 2005 as part of Velodyne Acoustics. His invention revolutionized perception and autonomy for automotive, new mobility, mapping, robotics, and security.

Velodyne said its high-performance product line includes a broad range of sensors, including the cost-effective Puck, the versatile Ultra Puck, and the autonomy-advancing Alpha Puck.

Mapper.ai staffers to join Velodyne

Mapper’s entire leadership and engineering teams will join Velodyne, bolstering the company’s large and growing software-development group. The talent from Mapper.ai will augment the current team of engineers working on Vella software, which will accelerate Velodyne’s production of ADAS systems.

Velodyne claimed its technology will allow customers to unlock advanced capabilities for ADAS features, including pedestrian and bicycle avoidance, Lane Keep Assistance (LKA), Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Traffic Jam Assist (TJA).

“By adding Vella software to our broad portfolio of lidar technology, Velodyne is poised to revolutionize ADAS performance and safety,” stated Anand Gopalan, chief technology officer at Velodyne. “Expanding our team to develop Vella is a giant step towards achieving our goal of mass-producing an ADAS solution that dramatically improves roadway safety.”

“Mapper technology gives us access to some key algorithmic elements and accelerates our development timeline,” Gopalan added. “Together, our sensors and software will allow powerful lidar-based safety solutions to be available on every vehicle.”

Mapper.ai to contribute to Velodyne software

Mapper.ai developers will work on the Vella software for the Velarray sensor. Source: Velodyne Lidar

“Velodyne has both created the market for high-fidelity automotive lidar and established itself as the leader. We have been Velodyne customers for years and have already integrated their lidar sensors into easily deployable solutions for scalable high-definition mapping,” said Dr. Nikhil Naikal, founder and CEO of Mapper, who is joining Velodyne. “We are excited to use our technology to speed up Velodyne’s lidar-centric software approach to ADAS.”

In addition to ADAS, Velodyne said it will incorporate Mapper technology into lidar-centric solutions for other emerging applications, including autonomous vehicles, last-mile delivery services, security, smart cities, smart agriculture, robotics, and unmanned aerial vehicles.

Sea Machines Robotics to demonstrate autonomous spill response

Source: Sea Machines Robotics

BOSTON — Sea Machines Robotics Inc. this week said it has entered into a cooperative agreement with the U.S. Department of Transportation’s Maritime Administration to demonstrate the ability of its autonomous technology in increasing the safety, response time and productivity of marine oil-spill response operations.

Sea Machines was founded in 2015 and claimed to be “the leader in pioneering autonomous control and advanced perception systems for the marine industries.” The company builds software and systems to increase the safety, efficiency, and performance of ships, workboats, and commercial vessels worldwide.

The U.S. Maritime Administration (MARAD) is an agency of the U.S. Department of Transportation that promotes waterborne transportation and its integration with other segments of the transportation system.

Preparing for oil-spill exercise

To make the on-water exercises possible, Sea Machines will install its SM300 autonomous-command system aboard a MARCO skimming vessel owned by Marine Spill Response Corp. (MSRC), a not-for-profit, U.S. Coast Guard-classified oil spill removal organization (OSRO). MSRC was formed with the Marine Preservation Association to offer oil-spill response services in accordance with the Oil Pollution Act of 1990.

Sea Machines plans to train MSRC personnel to operate its system. Then, on Aug. 21, Sea Machines and MSRC will execute simulated oil-spill recovery exercises in the harbor of Portland, Maine, before an audience of government, naval, international, environmental, and industry partners.

The response skimming vessel is manufactured by Seattle-based Kvichak Marine Industries and is equipped with a MARCO filter belt skimmer to recover oil from the surface of the water. This vessel typically operates in coastal or near-shore areas. Once installed, the SM300 will give the MSRC vessel the following new capabilities:

  • Remote autonomous control from an onshore location or secondary vessel,
  • ENC-based mission planning,
  • Autonomous waypoint tracking,
  • Autonomous grid line tracking,
  • Collaborative autonomy for multi-vessel operations,
  • Wireless remote payload control to deploy onboard boom and other response equipment, and
  • Obstacle detection and collision avoidance.

Round-the-clock response

In addition, Sea Machines said its technology enables minimally manned and unmanned autonomous maritime operations. Such configurations allow operators to respond to spill events 24/7, depending on recovery conditions, even when crews are unavailable or restricted, the company said. These configurations also reduce or eliminate crewmembers' exposure to toxic fumes and other safety hazards.

“Autonomous technology has the power not only to help prevent vessel accidents that can lead to spills, but also to facilitate better preparedness and aid in safer, more efficient, and effective cleanup,” said Michael G. Johnson, CEO of Sea Machines. “We look forward to working closely with MARAD and MSRC in these industry-modernizing exercises.”

“Our No. 1 priority is the safety of our personnel at MSRC,” said John Swift, vice president at MSRC. “The ability to use autonomous technology — allowing response operations to continue in an environment where their safety may be at risk — furthers our mission of response preparedness.”

Sea Machines promises rapid ROI for multiple vessels

Sea Machines’ SM Series of products, which includes the SM300 and SM200, brings marine operators a new era of task-driven, computer-guided vessel control, putting advanced autonomy within reach for small- and large-scale operations. SM products can be installed aboard existing or new-build commercial vessels, with return on investment typically seen within a year.

In addition, Sea Machines has received funding from Toyota AI Ventures.

Sea Machines is also a leading developer of advanced perception and navigation assistance technology for a range of vessel types, including container ships. The company is currently testing its perception and situational awareness technology aboard one of A.P. Moller-Maersk’s new-build ice-class container ships.

Perrone Robotics begins pilot of first autonomous public shuttle in Virginia

ALBEMARLE COUNTY, Va. — Perrone Robotics Inc., in partnership with Albemarle County and JAUNT Inc., last week announced that Virginia’s first public autonomous shuttle service began pilot operations in Crozet, Va.

The shuttle service, called AVNU for “Autonomous Vehicle, Neighborhood Use,” is driven by Perrone Robotics’ TONY (TO Navigate You) autonomous shuttle technology applied to a Polaris Industries Inc. GEM shuttle. Perrone Robotics said its Neighborhood Electric Vehicle (NEV) shuttle has industry-leading perception and guidance capabilities and will drive fully autonomously (with a safety driver) through county neighborhoods and downtown areas on public roads, navigating vehicle and pedestrian traffic. The base GEM vehicle meets federal safety standards for vehicles in its class.

“With over 33,000 autonomous miles traveled using our technology, TONY-powered vehicles bring the highest level of autonomy available in the world today to NEV shuttles,” said Paul Perrone, founder/CEO of Perrone Robotics. “We are deploying an AV platform that has been carefully refined since 2003, applied in automotive and industrial autonomy spaces, and now being leveraged to bring last-mile services to communities such as those here in Albemarle County, Va. What we deliver is a platform that operates shuttles autonomously in complex environments with roundabouts, merges, and pedestrian-dense areas.”

The TONY-based AVNU shuttle will offer riders trips within local residential developments, trips to connect neighborhoods, and connections from these areas to the downtown business district.

Polaris GEM partner of Perrone Robotics

Perrone Robotics provides autonomy for Polaris GEM shuttles. Source: Polaris Industries

More routes to come for Perrone AVNU shuttles

After the pilot phase, additional routes will demonstrate Albemarle County development initiatives, such as connector services for satellite parking. They will also connect with JAUNT‘s commuter shuttles, which are also targeted for autonomous operation with TONY technology.

“We have seen other solutions out there that require extensive manual operation for large portions of the course and very low speeds for traversal of tricky sections,” noted Perrone.  “We surpass these efforts by using our innovative, super-efficient, and completely novel and patented autonomous engine, MAX®, that has over 16 years of engineering and over 33,000 on and off-road miles behind it. We also use AI, but as a tool, not a crutch.”

“It is with great pleasure that we launch the pilot of the next generation of transportation — autonomous neighborhood shuttles — here in Crozet,” said Ann Mallek, White Hall District Supervisor. “Albemarle County is so proud to support our home town company, Perrone Robotics, and work with our transit provider JAUNT, through Smart Mobility Inc., to bring this project to fruition.”

Perrone said that AVNU is electrically powered, so the shuttle is quiet and non-polluting, and it uses solar panels to significantly extend system range. AVNU has been extensively tested by Perrone Robotics, and testing data has been evaluated by Albemarle County and JAUNT prior to launch.

Brain Corp Europe opens in Amsterdam


A BrainOS-powered autonomous floor scrubber. | Credit: Brain Corp

San Diego-based Brain Corp, the Softbank-backed developer of autonomous navigation systems, has opened its European headquarters in Amsterdam. The reason for the expansion is two-fold: it helps Brain better support partners who do business in Europe, and it helps Brain find additional engineering talent.

“Amsterdam is a fantastic gateway to Europe and has one of the largest airports in Europe,” Sandy Agnos, Brain’s Director of Global Business Development, told The Robot Report. “It’s very business and tech friendly. It is the second-fastest-growing tech community, talent-wise, in Europe.”

Brain hired Michel Spruijt to lead Brain Corp Europe. He will be tasked with driving sales of BrainOS-powered machines, providing partner support, and overseeing general operations throughout Europe. Agnos said Spruijt's previous experience growing an office “from a few employees to over 100 was impressive to us.”

“Under Michel Spruijt’s guidance, our vision of a world where the lives of people are made safer, easier, more productive, and more fulfilling with the help of robots will extend into Europe,” said Eugene Izhikevich, Brain Corp’s Co-Founder and CEO.

Agnos said there will initially be about 12 employees at Brain Corp Europe who focus mostly on service and support. She added that Brain is recruiting software engineering talent and will continue to grow the Amsterdam office.

A rendering of how BrainOS-powered machines sense their environment. | Credit: Brain Corp

Brain planning worldwide expansion

The European headquarters marks the second international office in Brain’s global expansion. The company opened an office in Tokyo in 2017. This made sense for a couple of reasons. Japanese tech giant Softbank led Brain’s $114 million funding round in mid-2017 via the Softbank Vision Fund. And Softbank’s new autonomous floor cleaning robot, Whiz, uses Brain’s autonomous navigation stack.

Agnos said Brain is planning to add other regional offices after Amsterdam. The dates are in flux, but future expansion includes:

  • Further growth in Europe in 2020
  • Expansion in Asia Pacific, specifically Australia and Korea, in mid- to late-2020
  • South America afterwards

“We follow our partners’ needs,” said Agnos. “We are becoming a global company with support offices around the world. The hardest part is we can’t expand fast enough. Our OEM partners already have large, global customer bases. We need to have the right people and infrastructure in each location.”

BrainOS-powered robots

BrainOS, the company’s cloud-connected operating system, currently powers thousands of floor care robots across numerous environments. Brain recently partnered with Nilfisk, a Copenhagen, Denmark-based cleaning solutions provider that has been around for 110-plus years. Nilfisk is licensing the BrainOS platform for the production, deployment, and support of its robotic floor cleaners.

Walmart, the world’s largest retailer, has 360 BrainOS-powered machines cleaning its stores across the United States. A human needs to initially teach the BrainOS-powered machines the layout of the stores. But after that initial demo, BrainOS’ combination of off-the-shelf hardware, sensors, and software enable the floor scrubbers to navigate autonomously. Brain employs a collection of cameras, sensors and LiDAR to ensure safety and obstacle avoidance. All the robots are connected to a cloud-based reporting system that allows them to be monitored and managed.

At ProMat 2019, Brain debuted AutoDelivery, a proof-of-concept autonomous delivery robot designed for retail stores, warehouses, and factories. AutoDelivery, which can tow several cart types, boasts cameras, 4G LTE connectivity, and routing algorithms that allow it to learn its way around a store. AutoDelivery isn’t slated for commercial launch until early 2020.

Izhikevich recently told The Robot Report that Brain is exploring other types of mobile applications, including delivery, eldercare, security and more. In July 2018, Brain led a $13.4 million Series B for Savioke, which makes autonomous delivery robots. For years, Savioke built its autonomous navigation stack from scratch using ROS.

Augmenting SLAM with deep learning

Some elements of the Spatial AI real-time computation graph. Credit: SLAMcore

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a robot’s location within it. SLAM is being gradually developed towards Spatial AI, the common sense spatial reasoning that will enable robots and other artificial devices to operate in general ways in their environments.

This will enable robots to not just localize and build geometric maps, but actually interact intelligently with scenes and objects.
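The core estimation problem behind SLAM can be illustrated with a toy example (not from this article): a one-dimensional graph-SLAM sketch in which a robot jointly estimates its own poses and a landmark position from noisy odometry and range measurements. All numbers here are made up for illustration.

```python
import numpy as np

# Toy 1D graph-SLAM sketch: a robot starts at x0 = 0 (fixed), takes two steps,
# and measures its range to a single landmark after each step. SLAM jointly
# estimates the trajectory (x1, x2) and the map (landmark l) in one solve.
u = [1.1, 0.9]   # noisy odometry readings (true step length: 1.0)
z = [3.9, 3.1]   # noisy ranges to the landmark (true landmark at 5.0)

# Each row encodes one constraint on the unknowns [x1, x2, l]:
A = np.array([
    [ 1.0,  0.0, 0.0],   # odometry: x1 - x0 = u[0]
    [-1.0,  1.0, 0.0],   # odometry: x2 - x1 = u[1]
    [-1.0,  0.0, 1.0],   # range:    l - x1 = z[0]
    [ 0.0, -1.0, 1.0],   # range:    l - x2 = z[1]
])
b = np.array([u[0], u[1], z[0], z[1]])

# Least-squares solve fuses all constraints simultaneously,
# correcting both the trajectory and the map.
(x1, x2, l), *_ = np.linalg.lstsq(A, b, rcond=None)
```

Even in this tiny example, the landmark estimate lands near its true position of 5.0 because the solver balances the conflicting odometry and range measurements rather than trusting either alone.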

Enabling semantic meaning

A key technology that is helping this progress is deep learning, which has enabled many recent breakthroughs in computer vision and other areas of AI. In the context of Spatial AI, deep learning has most obviously had a big impact on bringing semantic meaning to geometric maps of the world.

Convolutional neural networks (CNNs) trained to semantically segment images or volumes have been used in research systems to label geometric reconstructions in a dense, element-by-element manner. Networks like Mask-RCNN, which detect precise object instances in images, have been demonstrated in systems that reconstruct explicit maps of static or moving 3D objects.
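One common pattern behind such systems can be sketched as follows: each frame's per-pixel CNN labels are accumulated into a per-map-element histogram, and an element's label is the running consensus. This is an illustrative sketch, not code from any of the systems mentioned; the data-association step (which pixel falls on which map element) is assumed to come from the geometric reconstruction.

```python
import numpy as np

def fuse_labels(counts, pixel_labels, pixel_elements):
    """Add one frame's per-pixel class predictions to per-element histograms."""
    for cls, elem in zip(pixel_labels, pixel_elements):
        counts[elem, cls] += 1
    return counts

counts = np.zeros((2, 3), dtype=int)                       # 2 map elements, 3 classes
counts = fuse_labels(counts, [0, 0, 2, 1], [0, 0, 1, 1])   # frame 1's predictions
counts = fuse_labels(counts, [0, 2, 2], [0, 1, 1])         # frame 2's predictions
labels = counts.argmax(axis=1)                             # consensus label per element
```

Fusing over many frames makes the per-element labels robust to the occasional misclassified pixel, which is why dense semantic maps tend to be cleaner than any single frame's segmentation.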

Deep learning vs. estimation

In these approaches, the divide is clear: deep learning handles semantics, while hand-designed estimation methods handle geometry. More remarkable, at least to those of us from an estimation background, has been the emergence of learning techniques that now offer promising solutions to geometrical estimation problems themselves. Networks can be trained to predict robust frame-to-frame visual odometry, dense optical flow, or depth from a single image.

When compared to hand-designed methods for the same tasks, these methods are strong on robustness, since they will always make predictions that are similar to real scenarios present in their training data. But designed methods still often have advantages in flexibility in a range of unforeseen scenarios, and in final accuracy due to the use of precise iterative optimization.

The three levels of SLAM, according to SLAMcore. Credit: SLAMcore

The role of modular design

It is clear that Spatial AI will make increasingly strong use of deep learning methods, but an excellent question is whether we will eventually deploy systems where a single deep network trained end to end implements the whole of Spatial AI.  While this is possible in principle, we believe that this is a very long-term path and that there is much more potential in the coming years to consider systems with modular combinations of designed and learned techniques.

There is an almost continuous sliding scale of possible ways to formulate such modular systems. The end-to-end learning approach is ‘pure’ in the sense that it makes minimum assumptions about the representation and computation that the system needs to complete its tasks. Deep learning is free to discover such representations as it sees fit. Every piece of design which goes into a module of the system or the ways in which modules are connected reduces that freedom. However, modular design can make the learning process tractable and flexible, and dramatically reduce the need for training data.

Building in the right assumptions

There are certain characteristics of the real world that Spatial AI systems must work in that seem so elementary that it is unnecessary to spend training capacity on learning them. These could include:

  • Basic geometry of 3D transformation as a camera sees the world from different views
  • Physics of how objects fall and interact
  • The simple fact that the natural world is made up of separable objects at all
  • Environments are made up of many objects in configurations with a typical range of variability over time which can be estimated and mapped.
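The first assumption above — multi-view geometry that can simply be written down — is the kind of module that costs no training capacity at all. A minimal sketch, with illustrative values:

```python
import numpy as np

# Designed-in geometry: how points map between two camera poses is fixed by a
# rigid-body transform (R, t), not learned from data. Values are illustrative.

def transform_points(points, R, t):
    """Apply x' = R x + t to an (N, 3) array of points."""
    return points @ R.T + t

theta = np.pi / 2                        # 90-degree yaw between the two views
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])            # 1 m translation along x
pts = np.array([[1.0, 0.0, 0.0]])
out = transform_points(pts, R, t)        # the same point seen from the new pose
```

Because this relationship is exact, hard-coding it frees the learned modules to spend their capacity on the things that genuinely must be learned, such as semantics and appearance.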

By building these and other assumptions into modular estimation frameworks that still have significant deep learning capacity in the areas of both semantics and geometrical estimation, we believe that we can make rapid progress towards highly capable and adaptable Spatial AI systems. Modular systems have the further key advantage over purely learned methods that they can be inspected, debugged and controlled by their human users, which is key to the reliability and safety of products.

We still believe fundamentally in Spatial AI as a SLAM problem, and that a recognizable mapping capability will be the key to enabling robots and other intelligent devices to perform complicated, multi-stage tasks in their environments.

For those who want to read more about this area, please see my paper “FutureMapping: The Computational Structure of Spatial AI Systems.”

Andrew Davison, SLAMcore

About the Author

Professor Andrew Davison is a co-founder of SLAMcore, a London-based company that is on a mission to make spatial AI accessible to all. SLAMcore develops algorithms that help robots and drones understand where they are and what’s around them – in an affordable way.

Davison is Professor of Robot Vision in the Department of Computing at Imperial College London, where he leads the Robot Vision Research Group. He has spent 20 years conducting pioneering research in visual SLAM, with a particular emphasis on methods that work in real time with commodity cameras.

He has developed and collaborated on breakthrough SLAM systems including MonoSLAM and KinectFusion, and his research contributions have over 15,000 academic citations. He also has extensive experience of collaborating with industry on the application of SLAM methods to real products.

Stanford Doggo robot acrobatically traverses tough terrain

Putting their own twist on robots that amble through complicated landscapes, the Stanford Student Robotics club’s Extreme Mobility team at Stanford University has developed a four-legged robot that is not only capable of performing acrobatic tricks and traversing challenging terrain, but is also designed with reproducibility in mind. Anyone who wants their own version of the robot, dubbed Stanford Doggo, can consult comprehensive plans, code and a supply list that the students have made freely available online.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Nathan Kau, ’20, a mechanical engineering major and lead for Extreme Mobility. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

Whereas other similar robots can cost tens or hundreds of thousands of dollars and require customized parts, the Extreme Mobility students estimate the cost of Stanford Doggo at less than $3,000 — including manufacturing and shipping costs. Nearly all the components can be bought as-is online. The Stanford students said they hope the accessibility of these resources inspires a community of Stanford Doggo makers and researchers who develop innovative and meaningful spinoffs from their work.

Stanford Doggo can already walk, trot, dance, hop, jump, and perform the occasional backflip. The students are working on a larger version of their creation — which is currently about the size of a beagle — but they will take a short break to present Stanford Doggo at the International Conference on Robotics and Automation (ICRA) on May 21 in Montreal.

A hop, a jump and a backflip

In order to make Stanford Doggo replicable, the students built it from scratch. This meant spending a lot of time researching easily attainable supplies and testing each part as they made it, without relying on simulations.

“It’s been about two years since we first had the idea to make a quadruped. We’ve definitely made several prototypes before we actually started working on this iteration of the dog,” said Natalie Ferrante, Class of 2019, a mechanical engineering co-terminal student and Extreme Mobility Team member. “It was very exciting the first time we got him to walk.”

Stanford Doggo’s first steps were admittedly toddling, but now the robot can maintain a consistent gait and desired trajectory, even as it encounters different terrains. It does this with the help of motors that sense external forces on the robot and determine how much force and torque each leg should apply in response. These motors recompute 8,000 times a second and are essential to the robot’s signature dance: a bouncy boogie that hides the fact that it has no springs.

Instead, the motors act like a system of virtual springs, smoothly but perkily rebounding the robot into proper form whenever they sense it’s out of position.
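The virtual-spring idea described above can be illustrated with a proportional-derivative (PD) control loop: a torque that pulls the joint back toward a target angle, recomputed thousands of times a second. This is a minimal sketch, not the Extreme Mobility team's actual code; the gains and the unit-inertia joint model are illustrative assumptions.

```python
# A minimal sketch of a "virtual spring": a PD loop that pushes a leg
# joint back toward a target angle. Gains and the unit-inertia model
# are illustrative assumptions, not Stanford Doggo's real values.

def virtual_spring_torque(theta, theta_dot, theta_target,
                          k_p=40.0, k_d=8.0):
    """Return a motor torque that mimics a spring-damper.

    k_p acts like a spring stiffness pulling toward theta_target;
    k_d acts like a damper that bleeds off velocity.
    """
    return k_p * (theta_target - theta) - k_d * theta_dot


# Simulate one second of control at 8,000 updates per second,
# starting with the joint displaced 0.3 rad from its target.
dt = 1.0 / 8000.0
theta, theta_dot = 0.30, 0.0
for _ in range(8000):
    tau = virtual_spring_torque(theta, theta_dot, theta_target=0.0)
    theta_dot += tau * dt  # unit inertia, for illustration only
    theta += theta_dot * dt
# After a second of updates, the joint has settled near the target,
# even though no physical spring exists anywhere in the system.
```

Because the loop runs so fast relative to the joint dynamics, the motor behaves indistinguishably from a real spring-damper, which is what lets the robot rebound "smoothly but perkily" without any mechanical springs.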

Among the skills and tricks the team added to the robot’s repertoire, the students were exceptionally surprised at its jumping prowess. Running Stanford Doggo through its paces one (very) early morning in the lab, the team realized it was effortlessly popping up 2 feet in the air. By pushing the limits of the robot’s software, Stanford Doggo was able to jump 3, then 3½ feet off the ground.

“This was when we realized that the robot was, in some respects, higher performing than other quadruped robots used in research, even though it was really low cost,” recalled Kau.

Since then, the students have taught Stanford Doggo to do a backflip – but always on padding to allow for rapid trial and error experimentation.

Stanford Doggo robot acrobatically traverses tough terrain

Stanford students have developed Doggo, a relatively low-cost four-legged robot that can trot, jump and flip. (Image credit: Kurt Hickman)

What will Stanford Doggo do next?

If these students have it their way, the future of Stanford Doggo is in the hands of the masses.

“We’re hoping to provide a baseline system that anyone could build,” said Patrick Slade, graduate student in aeronautics and astronautics and mentor for Extreme Mobility. “Say, for example, you wanted to work on search and rescue; you could outfit it with sensors and write code on top of ours that would let it climb rock piles or excavate through caves. Or maybe it’s picking up stuff with an arm or carrying a package.”

That’s not to say they aren’t continuing their own work. Extreme Mobility is collaborating with the Robotic Exploration Lab of Zachary Manchester, assistant professor of aeronautics and astronautics at Stanford, to test new control systems on a second Stanford Doggo. The team has also finished constructing a robot twice the size of Stanford Doggo that can carry about 6 kilograms of equipment. Its name is Stanford Woofer.

Note: This article is republished from the Stanford University News Service.

Techmetics introduces robot fleet to U.S. hotels and hospitals

Fleets of autonomous mobile robots have been growing in warehouses and the service industry. Singapore-based Techmetics has entered the U.S. market with ambitions to supply multiple markets, which it already does overseas.

The company last month launched two new lines of autonomous mobile robots. The Techi Butler is designed to serve hotel guests or hospital patients by interacting with them via a touchscreen or smartphone. It can deliver packages, room-service orders, and linens and towels.

The Techi Cart is intended to serve back-of-house services such as laundry rooms, kitchens, and housekeeping departments.

“Techmetics serves 10 different applications, including manufacturing, casinos, and small and midsize businesses,” said Mathan Muthupillai, founder and CEO of Techmetics. “We’re starting with just two in the U.S. — hospitality and healthcare.”

Building a base

Muthupillai founded Techmetics in Singapore in 2012. “We spent the first three years on research and development,” he told The Robot Report. “By the end of 2014, we started sending out solutions.”

“The R&D team didn’t just start with product development,” recalled Muthupillai. “We started with finding clients first, identified their pain points and expectations, and got feedback on what they needed.”

“A lot of other companies make a robotic base, but then they have to build a payload solution,” he said. “We started with a good robot base that we found and added our body, software layer, and interfaces. We didn’t want to build autonomous navigation from scratch.”

“Now, we’re just getting components — lasers, sensors, motors — and building everything ourselves,” he explained. “The navigation and flow-management software are created in-house. We’ve created our own proprietary software.”

“We have a range of products, all of which use 2-D SLAM [simultaneous localization and mapping], autonomous navigation, and many safety sensors,” Muthupillai added. “They come with three lasers — two vertical and one horizontal for path planning. We’re working on a 3-D-based navigation solution.”

“Our robots are based on ROS [the Robot Operating System],” said Muthupillai. “We’ve created a unique solution that comes with third-party interfaces.”

Techmetics offers multiple robot models for different industries.

Source: Techmetics

Techmetics payloads vary

The payload capacity of Techmetics’ robots depends on the application and accessories and ranges from 250 to 550 lb. (120 to 250 kg).

“The payload and software are based on the behavior patterns in an industry,” said Muthupillai. “In manufacturing or warehousing, people are used to working around robots, but in the service sector, there are new people all the time. The robot must respond to them — they may stay in its path or try to stop it.”

“When we started this company, there were few mobile robots for the manufacturing industry. They looked industrial and had relatively few safety features because they weren’t near people,” he said. “We changed the form factor for hospitality to be good-looking and safer.”

“When we talk with hotels about the Butler robots, they needed something that could go to multiple rooms,” Muthupillai explained. “Usually, staffers take two to three items in a single trip, so if a robot went to only one room and then returned, that would be a waste of time. Our robots have three compartment levels based on this feedback.”

Elevators posed a challenge for the Techi Butler and Techi Cart — not just for interoperability, but also for human-machine interaction, he said.

“Again, people working with robots didn’t share elevators with robots, but in hospitals and hotels, the robot needs to complete its job alongside people,” Muthupillai said. “After three years, we’re still modifying or adding functionalities, and the robots can take an elevator or go across to different buildings.”

“We’re not currently focusing on the supply chain industry, but we will license and launch the base into the market so that third parties can create their own solutions,” he said.


Techi Cart transports linens and towels in a hotel or hospital. Source: Techmetics

Differentiators for Techi Butler and Cart

“We provide 10 robot models for four industries — no single company is a competitor for all our markets,” said Muthupillai. “We have three key differentiators.”

“First, customers can engage one vendor for multiple needs, and all of our robots can interact with one another,” he said. “Second, we talk with our clients and are always open to customization — for example, about compartment size — that others can’t do.”

“Third, we work across industries and can share our advantages across them,” Muthupillai claimed. “Since we already work with the healthcare industry, we already comply with safety and other regulations.”

“In hospitals or hotels, it’s not just about delivering a product from one point to another,” he said. “We’re adding camera and voice-recognition capabilities. If a robot sees a person who’s lost, it can help them.”


Distribution and expansion

Techmetics’ mobile robots are manufactured in Thailand. According to Muthupillai, 80% of its robots are deployed in hotels and hospitals, and 20% are in manufacturing. The company already has distributors in Australia, Taiwan, and Thailand, and it is leveraging existing international clients for its expansion.

“We have many corporate clients in Singapore,” Muthupillai said. “The Las Vegas Sands Singapore has deployed 10 robots, and their headquarters in Las Vegas is considering deploying our products.”

“Also, U.K.-based Yotel has two hotels in Singapore, and its London branch is also interested,” he added. “The Miami Yotel is already using our robots, and soon they will be in San Francisco.”

Techmetics has three models for customers to choose from. The first is outright purchase, and the second is a two- or three-year lease. “The third model is innovative — they can try the robots for three to six months or one year and then buy,” Muthupillai said.

Muthupillai said he has moved to Techmetics’ branch office in the U.S. to manage its expansion. “We’ll be doing direct marketing in California, and we’re in the process of identifying partners, especially on the East Coast.”

“Only the theme, colors, or logos changed. No special modifications were necessary for the U.S. market,” he said. “We followed safety regulations overseas, and they were aligned with U.S. regulations.”

“We will target the retail industry with a robot concierge, probably by the end of this year,” said Muthupillai. “We will eventually offer all 10 models in the U.S.”

Wenco, Hitachi Construction Machinery announce open ecosystem for autonomous mining


Autonomous mining haulage in Australia. Source: Wenco

TOKYO — Hitachi Construction Machinery Co. last week announced its vision for autonomous mining — an open, interoperable ecosystem of partners that integrate their systems alongside existing mine infrastructure.

Grounded in support for ISO standards and a drive to encourage new entrants into the mining industry, Hitachi Construction Machinery (HCM) said it is pioneering this approach to autonomy among global mining technology leaders. HCM has now publicly declared support for standards-based autonomy and is offering its technology to assist mining customers in integrating new vendors into their existing infrastructure. HCM’s support for open, interoperable autonomy is based on its philosophy for its partner-focused Solution Linkage platform.

“Open innovation is the guiding technological philosophy for Solution Linkage,” said Hideshi Fukumoto, vice president, executive officer, and chief technology officer at HCM. “Based on this philosophy, HCM is announcing its commitment to championing the customer enablement of autonomous mining through an open, interoperable ecosystem of partner solutions.”

“We believe this open approach provides customers the greatest flexibility and control for integrating new autonomous solutions into their existing operations while reducing associated risks and costs of alternative approaches,” he said.

The HCM Group is developing this open autonomy approach under the Solution Linkage initiative, a platform already available to HCM’s customers in the construction industry and now being made available to mining customers with support from HCM subsidiary Wenco International Mining Systems (Wenco).

Three development principles for Wenco, Hitachi

Solution Linkage is a standards-based platform grounded on three principles: open innovation, interoperability, and a partner ecosystem.

In this context, “open innovation” means the HCM Group’s support for open standards to enable the creation of multi-vendor solutions that reduce costs and increase value for customers.

By designing solutions in compliance with ANSI/ISA-95 and ISO standards for autonomous interoperability, Solution Linkage avoids vendor lock-in and offers customers the freedom to choose technologies from preferred vendors independent of their fleet management system, HCM said. This approach future-proofs customer technology infrastructure, providing a phased path for incorporating new technologies as they emerge, claimed the company.

This approach also benefits autonomy vendors who are new to mining, since they will be able to leverage HCM’s technology and experience in meeting the requirements of mining customers.

Interoperability, a key capability of the HCM Group, simplifies connectivity between systems to reduce operational silos, enabling end-to-end visibility and control across the mining value chain. HCM said that customers can use Solution Linkage to connect autonomous equipment from multiple vendors into existing fleet management and operations infrastructure.

The interoperability principle could also provide mines a systems-level understanding of their pit-to-port operation, providing access to more robust data analytics and process management. This capability would enable mine managers to make superior decisions based on operation-wide insights that deliver end-to-end optimization, said HCM.

Wenco and Hitachi have set open interoperability as a goal for mining automation

Mining customers think about productivity and profitability throughout their entire operation, from geology to transportation — from pit to port. Source: Wenco

HCM said its partner ecosystem will allow customers and third-party partners to use its experience and open platform to successfully provide autonomous functionality and reduce the risk of technological adoption. The initiative is already working with a global mining leader to integrate non-mining OEM autonomous vehicles into its existing mining infrastructure.

Likewise, HCM is actively seeking customer and vendor partnerships to further extend the value of this open, interoperable platform. If autonomy vendors have already been selected by a customer and are struggling to integrate into the client’s existing fleet management system or mine operations, Hitachi may be able to help using the Solution Linkage platform.

The HCM Group will reveal further details of its approach to open autonomy and Solution Linkage in a presentation at the CIM 2019 Convention, running April 28 to May 1 at the Palais des congrès in Montreal, Canada. Fukumoto and other senior executives from Hitachi and Wenco will discuss this strategy and details of Hitachi’s plans for mining in several presentations throughout the event. The schedule of Hitachi-related events is as follows:

  • Sunday, April 28, 4:30 PM — A welcome speech at the event’s Opening Ceremonies by Wenco Board Member and HCM Executive Officer David Harvey;
  • Monday, April 29, 10:00 AM — An Innovation Stage presentation on the Solution Linkage vision for open autonomy by Wenco Board Member and HCM Vice President and Executive Officer, CTO Hideshi Fukumoto;
  • Monday, April 29, 12:00 PM — Case Study: Accelerating Business Decisions and Mine Performance Through Operational Data Analysis at an Australian Coal Operation technical breakout presentation by Wenco Executive Vice-President of Corporate Strategy Eric Winsborrow;
  • Monday, April 29, 2:00 PM — Toward an Open Standard in Autonomous Control System Interfaces: Current Issues and Best Practices technical breakout presentation by Wenco Director of Technology Martin Politick;
  • Tuesday, April 30, 10:00 AM — An Innovation Stage presentation on Hitachi’s vision for data and IoT in mining by Wenco Executive Vice-President of Corporate Strategy Eric Winsborrow;
  • Wednesday, May 1, 4:00 PM — A concluding speech at the event’s closing luncheon by Wenco Board Member and HCM General Manager of Solution Business Center Yoshinori Furuno.

These presentations further detail the ongoing work of HCM and support the core message about open, interoperable, partner ecosystems.

To learn more about the HCM announcement in support of open and interoperable mining autonomy, Solution Linkage, or other HCM solutions, please contact Hitachi Construction Machinery.