SLAMcore spatial intelligence software now fully supports ROS 2

SLAMcore cartoon of robots looking at a map before entering a warehouse

SLAMcore enables robots to understand their environment and maintain localization within a map.

SLAMcore’s spatial intelligence software and SDK are now fully compatible with ROS 2. The Robot Operating System (ROS) is an open-source collection of software frameworks for robotics development. SLAMcore also supports ROS 1, allowing developers to integrate vision-based SLAM software into a variety of robots.

SLAMcore’s vision-based SLAM allows full 3D mapping and path planning within ROS 2 and supports the development of semantic mapping to add understanding of objects within a map. The company said its algorithms take advantage of several enhancements in ROS 2, notably the upgraded Nav2 stack, which supports fully autonomous navigation, and improved support for embedded processors.

Founded in 2016, London-based SLAMcore summed up the benefits of supporting ROS 2 as follows:

  • Enhanced SLAM efficiency for better memory and processor utilization: providing accurate, real-time position (in six degrees of freedom) locally on minimal compute and memory, freeing those resources for product capabilities.
  • Full 3D mapping and path planning: offering dense, 3D voxel-based maps that accurately capture the robot’s surroundings for navigation.
  • Potential for semantic object maps: providing access to future SLAMcore capabilities including semantic object identification and labelling within maps.
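The dense, voxel-based maps mentioned above discretize 3D space into small cubes and track which cubes are occupied. As a rough illustration of the idea (not SLAMcore’s actual API; the class and method names here are hypothetical), a minimal sparse voxel map can be sketched in a few lines:

```python
import math

class VoxelMap:
    """Minimal sparse voxel occupancy map (illustrative sketch only)."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size  # edge length of each voxel, in meters
        self.occupied = set()         # integer (i, j, k) indices of occupied voxels

    def _index(self, x, y, z):
        # Quantize a world-frame point to the index of its containing voxel.
        s = self.voxel_size
        return (math.floor(x / s), math.floor(y / s), math.floor(z / s))

    def insert_point(self, x, y, z):
        # Mark the voxel containing this sensed point as occupied.
        self.occupied.add(self._index(x, y, z))

    def is_occupied(self, x, y, z):
        return self._index(x, y, z) in self.occupied

m = VoxelMap(voxel_size=0.05)
m.insert_point(0.01, 0.02, 0.03)
assert m.is_occupied(0.04, 0.00, 0.00)      # same 5 cm voxel
assert not m.is_occupied(0.10, 0.00, 0.00)  # neighboring voxel
```

Storing only occupied voxels in a set keeps memory proportional to the mapped surface rather than the full volume, which is one reason voxel-based maps suit robots with limited onboard compute.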

“Our customers are looking to deploy robots in real-world and at-scale situations and are turning to vision-based SLAM systems for efficient mapping, location and positioning,” said SLAMcore CEO Owen Nicholson. “Integrating SLAMcore’s leading spatial intelligence with ROS 2 designs is a straightforward and highly cost-effective approach for them to quickly address complex SLAM challenges and move projects forward faster.”

Related: Overcoming the robotics Tower of Babel

The SLAMcore SDK, with support for ROS, ROS 2, and C++ interfaces, is available now. It can be downloaded from SLAMcore.com and deployed on standard hardware. SLAMcore’s engineers support a wide range of hardware and bespoke application set-ups, and next-generation capabilities are being explored at SLAMcore Labs.

The post SLAMcore spatial intelligence software now fully supports ROS 2 appeared first on The Robot Report.

Developing open-source systems for first responder legged robots

Digit, a bipedal robot from Agility Robotics, being tested at the University of Michigan. | Photo Credit: Joseph Xu/University of Michigan Engineering

Tomorrow’s wildfire fighters and other first responders may tag-team with robotic assistants that can hike through wilderness areas and disaster zones, thanks to a University of Michigan research project funded by a new $1 million grant from the National Science Foundation.

A key goal of the three-year project is to enable robots to navigate in real time, without the need for a pre-existing map of the terrain they’re to traverse. The project aims to take bipedal (two-legged) walking robots to a new level, equipping them to adapt on the fly to treacherous ground, dodge obstacles or decide whether a given area is safe for walking. The technology could enable robots to go into areas that are too dangerous for humans, including collapsed buildings and other disaster areas. It could also lead to prosthetics that are more intuitive for their users.

“I envision a robot that can walk autonomously through the forest here on North Campus and find an object we’ve hidden. That’s what’s needed for robots to be useful in search and rescue, and no robot right now can do it,” said Jessy Grizzle, principal investigator on the project and the Elmer G. Gilbert Distinguished University Professor of Engineering at U-M.

Grizzle, an expert in legged robots, is partnering on the project with Maani Ghaffari Jadidi, an assistant professor of naval architecture and marine engineering and expert in robotic perception. Grizzle says the pair’s complementary areas of expertise will enable them to work on broader swathes of technology than has been possible in the past.

To make it happen, the team will embrace an approach called “full-stack robotics,” integrating a series of new and existing pieces of technology into a single, open-source perception and movement system that can be adapted to robots beyond those used in the project itself. The technology will be tested on Digit and Mini Cheetah robots.

“What full-stack robotics means is that we’re attacking every layer of the problem at once and integrating them together,” Grizzle said. “Up to now, a lot of roboticists have been solving very specific individual problems. With this project, we aim to integrate what has already been done into a cohesive system, then identify its weak points and develop new technology where necessary to fill in the gaps.”

A Mini Cheetah robot at the University of Michigan. | Photo Credit: Robert Coelius/University of Michigan Engineering

One area of particular focus will be mapping – the project aims to find ways for robots to develop rich, multidimensional maps based on real-time sensory input so that they can determine the best way to cover a given patch of ground.

“When we humans go hiking, it’s easy for us to recognize areas that are too difficult or dangerous and stay away,” Ghaffari said. “We want a robot to be able to do something similar by using its perception tools to build a real-time map that looks several steps ahead and includes a measure of walkability. So it will know to stay away from dangerous areas, and it will be able to plan a route that uses its energy efficiently.”

Grizzle predicts that legged robots will be able to do this using math – for example, by calculating a standard deviation of ground height variation or how slippery a surface is. He plans to build more sophisticated perceptual tools that will help robots gather data by analyzing what their limbs are doing—a slip on an icy surface or a kick on a mogul, for example, would generate a new data point. The system will also help robots navigate loose ground and moving objects, such as rolling branches.
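Grizzle’s standard-deviation example can be sketched as a toy heuristic: treat the spread of sampled ground heights within a patch as a roughness measure and map it to a walkability score. The function name and threshold below are assumptions for illustration, not the project’s actual metric:

```python
import statistics

def walkability(heights, roughness_limit=0.04):
    """Score a terrain patch from sampled ground heights (meters).

    Uses the standard deviation of height as a simple roughness
    measure: flat patches score near 1.0, rougher patches lower,
    and anything at or above roughness_limit is treated as
    unwalkable (score 0.0). Illustrative heuristic only.
    """
    sigma = statistics.pstdev(heights)
    if sigma >= roughness_limit:
        return 0.0
    return 1.0 - sigma / roughness_limit

flat = [0.00, 0.01, 0.00, 0.01]   # pavement-like patch
rocky = [0.00, 0.12, 0.03, 0.20]  # rubble-like patch
assert walkability(flat) > walkability(rocky)
```

A real system would fuse many such cues, including the limb-contact events described above, into the per-cell cost a planner consumes, but the score-per-patch structure is the same.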

Rich and easily understandable maps, Ghaffari explained, will be equally important to the humans who may one day be operating those robots remotely in search-and-rescue operations or other applications.

“A shared understanding of the environment between humans and robots is essential, because the more a human team can see, the better they can interpret what the robot team is trying to accomplish,” Ghaffari said. “And that can help humans to make better decisions about what other resources need to be brought in or how the mission should proceed.”

Agility Robotics, developer of the Digit robot, recently released the video below showcasing how its humanoid robots are being tested in warehousing applications.

The post Developing open-source systems for first responder legged robots appeared first on The Robot Report.