Freedom Robotics raises seed funding for robotics dev tools, fleet controls

RMS enables fleet management and troubleshooting. Source: Freedom Robotics

SAN FRANCISCO — Freedom Robotics Inc. today announced that it has closed a $6.6 million seed round. The company provides a cloud-based software development infrastructure for managing fleets of robots.

Freedom Robotics cited a World Economic Forum study predicting that machines will perform more workplace tasks than humans by 2025, a shift projected to create 58 million net new jobs worldwide. The company plans to use its funding to build out its team and technology.

Freedom Robotics claimed that robotics startups can get their products to market 10 times faster by using its tools to do the “undifferentiated heavy lifting” rather than devoting employees to developing a full software stack. The company said its platform-agnostic Robotics Management Software (RMS) provides the “building blocks” for prototyping, building, operating, and scaling robot fleets.

Freedom Robotics builds RMS for developers

“We’ve seen that robotics is hard,” observed Dimitri Onistsuk, co-founder of Freedom Robotics. “In sixth grade, I wrote a letter to myself saying that I would go to MIT, drop out, and found a company that would change the world.”

Onistsuk did go to MIT, drop out, and draw on his experiences with Hans Lee and Joshua Wilson, now chief technology officer and CEO, respectively, at Freedom Robotics.

“We had been building things together before there was a cloud,” recalled Onistsuk. “Now in robotics, very few people have the ability to build a full stack.”

“We see robotics developers who have wonderful applications, like caring for the elderly; transportation; or dull, dirty, and dangerous work,” he said. “Everyone agrees on the value of this area, but they don’t realize the complexity of day-to-day iteration, which requires many engineers and a lot of infrastructure for support.”

“Robotics is like the Web in 2002, where everyone who wants to make an attempt has to raise $10 million and get expert talent in things like computer vision, mechatronics, systems integration, and ROS,” Onistsuk told The Robot Report. “It costs a lot of money to try even once to get a product to market.”

“We’ve combined layers of distinct software services by bringing modern software-development techniques into robotics, which traditionally had a hardware focus,” he said. “You can use one or many — whatever you have to do to scale.”

‘AWS for robots’

Freedom Robotics said that its cloud-based tools can be installed with just a single line of code, and its real-time visualization tools combine robotics management and analysis capabilities that were previously scattered across systems.

“Developers are always trying to improve their processes and learn new things,” said Onistsuk. “Amazon Web Services allows you to bring up a computer with a single line of code. We spent most of the first six months as a company figuring out how to do that for robots. We even bought the domain name ‘90 seconds to go.’”

“You can drop in one line of code and immediately see real-time telemetry and have a cloud link to a robot from anywhere in the world,” he said. “Normally, when you want to adopt new components and are just trying to build a robot where the components talk to one another, that can take months.”

“During one on-boarding call, a customer was able to see real-time telemetry from robots within two minutes,” Onistsuk said. “They had never seen sensor-log and live-streaming data together. They thought the video was stuttering, but then an engineer noticed an error in a robot running production software. The bug had already been pushed out to customers. They never before had the tools to see all their data in one place in developer-friendly ways.”

“That is the experience we’re getting when building software alongside the people who build robots,” he said. “With faster feedback loops, companies can iterate 10 times faster and move developers to other projects.”

Freedom Robotics’ RMS combines robotics tools to help developers and robotics managers. Source: Freedom Robotics

The same tools for development, management

Onistsuk said that his and Lee’s experience led them to follow standard software-development practices. “Some truths are real — for your core infrastructure, you shouldn’t have to own computers — our software is cloud-based for that reason,” he said.

“We stand on the shoulders of giants and practice what we preach,” Onistsuk asserted. “Pieces of our underlying infrastructure run on standard clouds, and we follow standard ways of building them.”

He said that not only does Freedom Robotics offer standardized development tools; it also uses them to build its RMS.

“With a little thought, for anything that you want to do with our product, you have access to the API calls across the entire fleet,” said Onistsuk. “We used the same APIs to build the product as you would use to run it.”
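
Onistsuk’s description suggests a conventional REST pattern: one authenticated call that works the same whether it touches a single robot or a whole fleet. The sketch below is purely illustrative — the endpoint, token, and field names are hypothetical, not Freedom Robotics’ documented API.

```python
import requests

# Hypothetical endpoint and field names -- for illustration only;
# this is not Freedom Robotics' documented API.
API_BASE = "https://api.example-fleet-cloud.com/v1"
TOKEN = "YOUR_API_TOKEN"

def list_devices():
    """Fetch every robot registered to the account, fleet-wide."""
    resp = requests.get(
        f"{API_BASE}/devices",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# The same call pattern can serve a dashboard, an alerting service, or
# the vendor's own product -- the point of "same APIs" in the quote above.
for device in list_devices():
    print(device.get("name"), device.get("status"))
```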

Resource monitoring with RMS. Source: Freedom Robotics

Investors and interoperability

Initialized Capital led the funding round, with participation from Toyota AI Ventures, Green Cow Venture Capital, Joe Montana’s Liquid 2 Ventures, S28 Capital partner Andrew Miklas, and James Lindenbaum. They joined existing investors Kevin Mahaffey, Justin Kan, Matt Brezina, Arianna Simpson, and Josh Buckley.

“We’ll soon reach a point when there are more robots than cell phones, and we’ll need the ‘Microsoft of robotics’ platform to power such a massive market,” said Garry Tan, managing partner at Initialized Capital, which has backed companies such as Instacart, Coinbase, and Cruise.

“Cloud learning will be a game-changer for robotics, allowing the experience of one robot to be ‘taught’ to the rest on the network. We’ve been looking for startups with the technology and market savvy to realize this cloud robotics future through fleet management, control, and analytics,” said Jim Adler, founding managing director at Toyota AI Ventures. “We were impressed with Freedom Robotics’ customer-first, comprehensive approach to managing and controlling fleets of robots and look forward to supporting the Freedom team as they make cloud robotics a market reality.”

“We found out about Toyota AI Ventures through its Twitter account,” said Onistsuk. “We got some referrals and went and met with them. As the founder of multiple companies, Jim [Adler] understood us in a way that industry-specific VCs couldn’t. He got our experience in robotics, building teams, and data analytics.”

What about competing robotics development platforms? “We realized from Day 1 that we shouldn’t be fighting,” Onistsuk replied. “We’re fully integrated with the cloud offerings of Amazon, Google, and Microsoft, as well as ROS. We have drop-in compatibility.”

“What we’re trying to power with that is allowing developers to build things that differentiate their products and services and win customers,” he added. “This is similar to our cloud-based strategy. We try to be hardware-agnostic. We want RMS to work out of the box with as many tools and pieces of hardware as possible so that people can try things rapidly.”

The Freedom Robotics team has raised seed funding. Source: Freedom Robotics

Hardware gets commoditized

“Hardware is getting commoditized and driving market opportunity,” said Onistsuk. “For instance, desktop compute is only $100 — not just Raspberry Pi, but x86 — you can buy a real computer running a full operating system.”

“Sensors are getting cheaper thanks to phones, and 3D printing will affect actuators. NVIDIA is putting AI into a small, low-power form factor,” he added. “With cheaper components, we’re looking at $5,000 robot arms rather than $500,000 arms, and lots of delivery companies are looking to make a vehicle autonomous and operate it at a competitive price point.”

“Companies can use RMS to build their next robots as a service [RaaS], and we’ve worked with everything from the largest entertainment companies to sidewalk delivery startups and multibillion-dollar delivery companies,” Onistsuk said. “Freedom Robotics is about democratizing robotics development and removing barriers to entry so that two guys in a garage can scale out to a business because of demand. The dreams of people with real needs in robotics will cause the next wave of innovation.”

“Software infrastructure is hard to do — we take what many developers consider boring so that they can sell robots into businesses or the home that get better over time,” he said.

‘Inspiring’ feedback

Customer feedback so far has been “overwhelmingly inspiring,” said Onistsuk. “The best moments are getting an e-mail from a customer saying, ‘We’re using your product, and we thought we didn’t want some login or alerting plug-in. We have a demo tomorrow, and it would take four months to build it, but you can do it.'”

“We’ve seen from our interactions that the latest generation of robotics developers has different expectations,” he said. “We’re seeing them ‘skating to where the puck is,’ iterating quickly to build tools and services around our roadmap.”

“The RMS is not just used by developers,” Onistsuk said. “Development, operations, and business teams can find and solve problems in a collaborative way with the visualization tool. We can support teams managing multiple robots with just a tablet, and it integrates with Slack.”

“We can go from high-level data down to CPU utilization,” Lee said. “With one click, you can get a replay of GPS and telemetry data and see every robot with an error. Each section is usually one engineer’s responsibility.”

“A lot of times, people develop robots for university research or an application, but how does the robot perform in the field when it’s in a ditch?” said Lee. “We can enable developers to make sure robots perform better and safer.”

Freedom Robotics’ software is currently used in industries including agriculture, manufacturing, logistics, and restaurants.

“This is similar to getting dev done in minutes, not months, and it could speed up the entire robotics industry,” Onistsuk added. “Investors are just as excited about the team, scaling the business, and new customers as I am.”

Acutronic Robotics fails to find funding for H-ROS for robot hardware

Acutronic Robotics today announced on its blog that it is shutting down on July 31. The company, which has offices in Switzerland and Spain, offered communication tools based on the Robot Operating System for modular robot design.

The company, which was founded in 2016 after Acutronic Link Robotics AG’s acquisition of Erle Robotics, said it had been waiting on financing. Acutronic Robotics was developing the Hardware Robot Operating System or H-ROS, a communication bus to enable robot hardware to interoperate smoothly, securely, and safely.

Components of Acutronic’s technology included the H-ROS System on Module (SoM) device for the bus, ROS2 as the “universal robot language” and application programming interface, and the Hardware Robot Information Model (HRIM) as a common ROS dialect.

Acutronic was involved in the development of the open-source ROS2 and was recently named a “Top 10 ROS-based robotics company” for 2019. The company built MARA, the first robot natively running on ROS2.

In January, Acutronic Robotics said that it had made grippers from Robotiq “seamlessly interoperable with all other ROS2-speaking robotic components, regardless of their original manufacturer.”

H-ROS was intended to make robot hardware work together more easily. Source: Acutronic Robotics

Funding challenges

HRIM was funded through the EU’s ROS-Industrial (ROSIN) project, and the U.S. Defense Advanced Research Projects Agency (DARPA) had invested in H-ROS.

In September 2017, Acutronic raised an unspecified amount of Series A funding led by the Sony Innovation Fund. More recently, however, the company had difficulty finding venture capital.

“We continue to believe that our robot modularity technology and vision are strategically relevant, both product- and positioning-wise,” stated Victor Mayoral, CEO of Acutronic Robotics. “However, we probably hit the market too early and fell short of resources.”

According to Acutronic’s blog post, the company received acquisition proposals but was unable to agree to any of them.

The global robot operating system market will experience a compound annual growth rate of 8.8% between 2018 and 2026, predicts Transparency Market Research. However, that forecast includes proprietary industrial software and customized robots.

Other ROS-related news today included Freedom Robotics’ seed funding and Fetch Robotics’ Series C. As The Robot Report previously reported, AWS RoboMaker works with ROS Industrial, and Microsoft recently announced support for ROS in Windows 10.

Uncertain future for Acutronic team

Mayoral didn’t specify what would happen to Acutronic Robotics’ approximately 30 staffers or its intellectual property, but he tried to end on an optimistic note.

“We are absolutely convinced that ROS is a key blueprint for the future of robotics,” Mayoral said. “The ROS robotics community has been a constant inspiration for all of us over these past years, and I’m sure that with the new ROS 2, many more companies will be inspired in the same way. Our team members are excited about their next professional steps, and I’m sure many of us will stay very close to the ROS community.”

The Acutronic Robotics team. Source: Acutronic

Velodyne Lidar acquires Mapper.ai for advanced driver assistance systems

SAN JOSE, Calif. — Velodyne Lidar Inc. today announced that it has acquired Mapper.ai’s mapping and localization software, as well as its intellectual property assets. Velodyne said that Mapper’s technology will enable it to accelerate development of the Vella software that complements its directional-view Velarray lidar sensor.

The Velarray is the first solid-state Velodyne lidar sensor that is embeddable and fits behind a windshield, said Velodyne, which described it as “an integral component for superior, more effective advanced driver assistance systems” (ADAS).

The company provides lidar sensors for autonomous vehicles and driver assistance. David Hall, Velodyne’s founder and CEO, invented real-time surround-view lidar systems in 2005 as part of Velodyne Acoustics. His invention revolutionized perception and autonomy for automotive, new mobility, mapping, robotics, and security.

Velodyne said its high-performance product line includes a broad range of sensors, including the cost-effective Puck, the versatile Ultra Puck, and the autonomy-advancing Alpha Puck.

Mapper.ai staffers to join Velodyne

Mapper’s entire leadership and engineering teams will join Velodyne, bolstering the company’s large and growing software-development group. The talent from Mapper.ai will augment the current team of engineers working on Vella software, which will accelerate Velodyne’s production of ADAS systems.

Velodyne claimed its technology will allow customers to unlock advanced capabilities for ADAS features, including pedestrian and bicycle avoidance, Lane Keep Assistance (LKA), Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Traffic Jam Assist (TJA).

“By adding Vella software to our broad portfolio of lidar technology, Velodyne is poised to revolutionize ADAS performance and safety,” stated Anand Gopalan, chief technology officer at Velodyne. “Expanding our team to develop Vella is a giant step towards achieving our goal of mass-producing an ADAS solution that dramatically improves roadway safety.”

“Mapper technology gives us access to some key algorithmic elements and accelerates our development timeline,” Gopalan added. “Together, our sensors and software will allow powerful lidar-based safety solutions to be available on every vehicle.”

Mapper.ai developers will work on the Vella software for the Velarray sensor. Source: Velodyne Lidar

“Velodyne has both created the market for high-fidelity automotive lidar and established itself as the leader. We have been Velodyne customers for years and have already integrated their lidar sensors into easily deployable solutions for scalable high-definition mapping,” said Dr. Nikhil Naikal, founder and CEO of Mapper, who is joining Velodyne. “We are excited to use our technology to speed up Velodyne’s lidar-centric software approach to ADAS.”

In addition to ADAS, Velodyne said it will incorporate Mapper technology into lidar-centric solutions for other emerging applications, including autonomous vehicles, last-mile delivery services, security, smart cities, smart agriculture, robotics, and unmanned aerial vehicles.

LUKE prosthetic arm has sense of touch, can move in response to thoughts

Keven Walgamott had a good “feeling” about picking up the egg without crushing it. What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of LUKE, a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by University of Utah biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (named after the robotic hand that Luke Skywalker got in The Empire Strikes Back) to mimic the way a human hand feels objects by sending the appropriate signals to the brain.

Their findings were published in a new paper co-authored by University of Utah biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark, and other colleagues in the latest edition of the journal Science Robotics.

Sending the right messages

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up, and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the University of Utah, was able to pluck grapes without crushing them, pick up an egg without cracking it, and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

Keven Walgamott wears the LUKE prosthetic arm. Credit: University of Utah Center for Neural Interfaces

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicone “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the University of Utah team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by University of Utah biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array.

The Array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. To perform tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the Array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact of an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.
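
As a rough illustration of that transition (a sketch with hypothetical parameter values, not the model from the paper), the firing rate can be modeled as a burst at contact that decays exponentially toward a lower sustained rate:

```python
import numpy as np

def firing_rate(t, t_contact=0.0, r_burst=300.0, r_sustained=60.0, tau=0.05):
    """Spike rate (Hz): zero before contact, then a burst that decays
    with time constant tau toward a sustained rate. All parameter
    values are illustrative, not taken from the Science Robotics paper."""
    dt = t - t_contact
    rate = r_sustained + (r_burst - r_sustained) * np.exp(-np.maximum(dt, 0.0) / tau)
    return np.where(dt < 0.0, 0.0, rate)

t = np.linspace(-0.1, 0.4, 6)
print(firing_rate(t))  # 0 Hz before contact, ~300 Hz at contact, tapering toward 60 Hz
```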

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the University of Utah’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering, and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Editor’s note: Reposted from the University of Utah.

TIAGo++ robot from PAL Robotics ready for two-armed tasks

Among the challenges for developers of mobile manipulation and humanoid robots is the need for an affordable and flexible research platform. PAL Robotics last month announced its TIAGo++, a robot that includes two arms with seven degrees of freedom each.

As with PAL Robotics‘ one-armed TIAGo, the new model is based on the Robot Operating System (ROS) and can be expanded with additional sensors and end effectors. TIAGo++ is intended to enable engineers to create applications that include a touchscreen interface for human-robot interaction (HRI) and require simultaneous perception, bilateral manipulation, mobility, and artificial intelligence.

In addition, TIAGo++ supports NVIDIA’s Jetson TX2 as an extra for machine learning and deep learning development. Tutorials for ROS and open-source simulation for TIAGo are available online.

Barcelona, Spain-based PAL, which was named a “Top 10 ROS-based robotics company to watch in 2019,” also makes the Reem and TALOS robots.

Jordi Pagès, product manager of the TIAGo robot at PAL Robotics, responded to the following questions about TIAGo++ from The Robot Report:

For the development of TIAGo++, how did you collect feedback from the robotics community?

Pagès: PAL Robotics has a long history in research and development. We have been creating service robotics platforms since 2004. When we started thinking about the TIAGo robot development, we asked researchers from academia and industry which features they would expect or value in a platform for research.

Our goal with TIAGo has always been the same: to deliver a robust platform for research that easily adapts to diverse robotics projects and use cases. That’s why it was key to be in touch with the robotics and AI developers from start.

After delivering the robots, we usually ask for feedback and stay in touch with the research centers to learn about their activities and experiences, and the possible improvements or suggestions they would have. We do the same with the teams that use TIAGo for competitions like RoboCup or the European Robotics League [ERL].

At the same time, TIAGo is used in diverse European-funded projects where end users from different sectors, from healthcare to industry, are involved. This allows us to also learn from their feedback and keep finding new ways in which the platform could be of help in a user-centered way. That’s how we knew that adding a second arm to TIAGo’s set of modular options could be of help to the robotics community.

How long did it take PAL Robotics to develop the two-armed TIAGo++ in comparison with the original model?

Pagès: Our TIAGo platform is very modular and robust, so it took us just a few months from making the decision to having a working TIAGo++ ready to go. The modularity of all our robots and our wide experience developing humanoids usually helps us a lot in reducing the redesign and production time.

The software is also very modular, with extensive use of ROS, the de facto standard robotics middleware. Our customers are able to upgrade, modify, and substitute ROS packages. That way, they can focus their attention on their real research on perception, navigation, manipulation, HRI, and AI.

How high can TIAGo++ go, and what’s its reach?

Pagès: TIAGo++ can reach the floor and up to 1.75m [5.74 ft.] high with each arm, thanks to the combination of its 7 DoF [seven degrees of freedom] arms and its lifting torso. The maximum extension of each arm is 92cm [36.2 in.]. In our experience, this workspace allows TIAGo to work in several environments like domestic, healthcare, and industry.

The TIAGo can extend in height, and each arm has a reach of about 3 ft. Source: PAL Robotics

What’s the advantage of seven degrees of freedom for TIAGo’s arms over six degrees?

Pagès: A 7-DoF arm is much better in this sense for people who will be doing manipulation tasks. Adding more DoFs means that the robot can reach more poses — positions and orientations — of its arm and end effector than it could before.

Also, this enables developers to reduce singularities, avoiding undesired abrupt movements. This means that TIAGo has more ways to move its arm and reach a certain pose in space, with a more optimal combination of movements.
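
In standard manipulator terms (a well-known kinematic result, not specific to PAL’s arms): a pose in 3D space has six dimensions, so a 7-DoF arm’s Jacobian maps seven joint velocities onto six task velocities and always has a null space of dimension at least one — permitting “self-motion” that reshapes the arm without moving the end effector:

```latex
J(q)\,\dot{q} = \dot{x}, \qquad J(q) \in \mathbb{R}^{6 \times 7}
\quad\Rightarrow\quad \dim \mathcal{N}\bigl(J(q)\bigr) \ge 7 - 6 = 1 .
```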

What sensors and motors are in the robot? Are they off-the-shelf or custom?

Pagès: All our mobile-based platforms, like the TIAGo robot, combine many sensors. TIAGo has a laser and sonars to move around and localize itself in space, an IMU [inertial measurement unit], and an RGB-D camera in the head. It can have a force/torque sensor on the wrist, especially useful to work in HRI scenarios. It also has a microphone and a speaker.

TIAGo has current sensing in every joint of the arm, enabling a very soft, effortless torque control on each of the arms. The possibility of having an expansion panel with diverse connectors makes it really easy for developers to add even more sensors to it, like a thermal camera or a gripper camera, once they have TIAGo in their labs.

About the motors, TIAGo++ makes use of our custom joints, integrating high-quality commercial components and our own electronic power management and control. All motors also have encoders to measure the current motor position.

What’s the biggest challenge that a humanoid like TIAGo++ can help with?

Pagès: TIAGo++ can help with tasks that require bimanipulation in combination with navigation, perception, HRI, or AI. Even though a one-armed robot can already perform a wide range of tasks, there are many actions in our daily life that require two arms, or that are more comfortably or quickly done with two arms rather than one.

For example, two arms are good for grasping and carrying a box, carrying a platter, serving liquids, opening a bottle or a jar, folding clothes, or opening a wardrobe while holding an object. In the end, our world and tools have been designed for the average human body, which has two arms, so TIAGo++ can adapt to that.

As a research platform based on ROS, is there anything that isn’t open-source? Are navigation and manipulation built in or modular?

Pagès: Most software is provided either open-sourced or with headers and dynamic libraries so that customers can develop applications making use of the given APIs or using the corresponding ROS interfaces at runtime.

For example, all the controllers in TIAGo++ are plugins of ros_control, so customers can implement their own controllers following our public tutorials and deploy them on the real robot or in the simulation.

Moreover, users can replace any ROS package by their own packages. This approach is very modular, and even if we provide navigation and manipulation built-in, developers can use their own navigation and manipulation instead of ours.
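
As an illustration of what that modularity looks like in practice, here is a minimal ROS 1 node that sends a navigation goal through the standard move_base action interface. It assumes a conventional move_base-style navigation stack is running — whether the stock navigation or a user’s replacement, clients interact with it the same way:

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("nav_goal_example")

# Whichever navigation implementation is running -- stock or custom --
# clients talk to it through the same action interface.
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0   # 1 m forward in the map frame
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Navigation result state: %s", client.get_state())
```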

Did PAL work with NVIDIA on design and interoperability, or is that an example of the flexibility of ROS?

Pagès: It is both an example of how easy it is to expand TIAGo with external devices and how easy it is to integrate those devices in ROS.

One example of applications that our clients have developed using the NVIDIA Jetson TX2 is the “Bring me a beer” task from the Homer Team [at RoboCup] at the University of Koblenz-Landau. They made a complete application in which the TIAGo robot could understand a natural-language request, navigate autonomously to the kitchen, open the fridge, recognize and select the requested beer, grasp it, and deliver it back to the person who asked for it.

As a company, we work with multiple partners, but we also believe that our users should be able to have a flexible platform that allows them to easily integrate off-the-shelf solutions they already have.

How much software support is there for human-machine interaction via a touchscreen?

Pagès: The idea behind integrating a touchscreen on TIAGo++ is to bring customers the possibility to implement their own graphical interface, so we provide full access to the device. We work intensively with researchers, and we provide platforms as open as our customers need, such as a haptic interface.

What do robotics developers need to know about safety and security?

Pagès: A list of safety measures and best practices is provided in the TIAGo handbook so that customers can ensure safety both around the robot and for the robot itself.

TIAGo also features some implicit control modes that help ensure safety during operation. For example, an effort control mode for the arms is provided so that collisions can be detected and the arm can be set in gravity-compensation mode.

Furthermore, the wrist can include a six-axis force/torque sensor, providing more accurate feedback about collisions or interactions of the end effector with the environment. This sensor can also be used to increase the safety of the robot. We provide this information to our customers and developers so they are always aware of the safety measures.

Have any TIAGo users moved toward commercialization based on what they’ve learned with PAL’s systems?

Pagès: At the moment, from the TIAGo family, we commercialize the TIAGo Base for intralogistics automation in indoor spaces such as factories or warehouses.

Some configurations of the TIAGo robot have been tested in pilots in healthcare applications. In the EnrichMe H2020 EU Project, the robot autonomously assisted elderly people at home for up to approximately two months.

In robotics competitions such as the ERL, teams have demonstrated TIAGo’s outstanding performance in accomplishing specific actions in a domestic environment. Two teams finished first and third in the RoboCup@Home OPL 2019 in Sydney, Australia. The Homer Team won for the third time in a row using TIAGo, which even cleaned a toilet during the competition.

The CATIE Robotics Team finished third in the first world championship in which it participated; in one task, its robot took out the trash.

The TIAGo robot is also used for European Union Horizon 2020 experiments in which collaborative robots that combine mobility with manipulation are used in industrial scenarios. This includes projects such as MEMMO for motion generation, Co4Robots for coordination, and RobMoSys for open-source software development.

Besides this research aspect, we have industrial customers that are using TIAGo to improve their manufacturing procedures.

How does TIAGo++ compare with, say, Rethink Robotics’ Baxter?

Pagès: With TIAGo++, besides the platform itself, you also get support, extra advanced software solutions, and assessment from a company that has been in the robotics sector for more than 15 years. Robots like TIAGo++ also draw on our know-how in both software and hardware, knowledge that the team has gathered from developing cutting-edge biped humanoids like the torque-controlled TALOS.

From a technical point of view, TIAGo++ was made very compact to suit environments shared with people such as homes. Baxter was a very nice entry-point platform and was not originally designed to be a mobile manipulator but a fixed one. TIAGo++ can use the same navigation used in our commercial autonomous mobile robot for intralogistics tasks, the TIAGo Base.

Besides, TIAGo++ is a fully customizable robot in all aspects: You can select the options you want in hardware and software, so you get the ideal platform you want to have in your robotics lab. For a mobile manipulator with two 7-DoF arms, force/torque sensors, ROS-based, affordable, and with community support, we believe TIAGo++ should be a very good option.

The TIAGo community is growing around the world, and we are sure that we will see more and more robots helping people in different scenarios very soon.

What’s the price point for TIAGo++?

Pagès: The starting price is around €90,000 [$100,370 U.S.]. It really depends on the configuration, devices, computer power, sensors, and extras that each client can choose for their TIAGo robot, so the price can vary.

RaaS and AI help retail supply chains adopt and manage robotics, says Kindred VP

Unlike industrial automation, which has been affected by a decline in automotive sales worldwide, robots for e-commerce order fulfillment continue to face strong demand. Warehouses, third-party logistics providers, and grocers are turning to robots because of competitive pressures, labor scarcities, and consumer expectations of rapid delivery. However, robotics developers and suppliers must distinguish themselves in a crowded market. The Robotics-as-a-Service, or RaaS, model is one way to serve retail supply chain needs, said Kindred Inc.

By 2025, there will be more than 4 million robots in operation at 50,000 warehouses around the world, predicted ABI Research. It cited improvements in computer vision, artificial intelligence, and deep learning.

“Economically viable mobile manipulation robots from the likes of RightHand Robotics and Kindred Systems are now enabling a wider variety of individual items to be automatically picked and placed within a fulfillment operation,” said ABI Research. “By combining mobile robots, picking robots, and even autonomous forklifts, fulfillment centers can achieve greater levels of automation in an efficient and cost-effective way.”

“Many robot technology vendors are providing additional value by offering flexible pricing options,” stated the research firm. “Robotics-as-a-Service models mean that large CapEx costs can be replaced with more accessible OpEx costs that are directly proportional to the consumption of technologies or services, improving the affordability of robotics systems among the midmarket, further driving adoption.”

The Robot Report spoke with Victor Anjos, who recently joined San Francisco-based Kindred as vice president of engineering, about how AI and RaaS can help the logistics industry.

Kindred applies AI to sortation

Can you briefly describe Kindred’s offerings?

Anjos: Sure. Kindred makes AI-enhanced, autonomous, piece-picking robots. Today, they’re optimized to perform the piece-picking process in a fulfillment center, for example, in a facility that fills individual e-commerce orders.

It’s important to understand our solution is more than a shiny robotic arm. Besides the part you can see — the robotic arm — our solution includes an AI platform to enable autonomous learning and in-motion planning, plus the latest in robotic technology, backed by our integration and support services.

The Robot Report visited Kindred at Automate/ProMat 2019 — what’s new since then?

Anjos: Since then, we’ve been hard at work on a new gripper optimized to handle rigid items like shampoo bottles and small cartons. We’ve got a ton of new AI models in development, and we continue to tune SORT’s performance using reinforcement learning.

What should engineers at user companies know about AutoGrasp and SORT?

Anjos: AutoGrasp is the unique combination of technologies behind SORT. There’s the AI-powered vision, grasping, and manipulation technology that allows the robot to quickly and accurately sort batches into discrete orders.

Then there’s the robotic device itself, which has been engineered for speed, agility and a wide range of motion. And finally, we offer WMS [warehouse management system] integration, process design, and deployment services, as well as ongoing maintenance and support, of course.

What use cases are better for collaborative robots or cobots versus standard industrial arms?

Anjos: Kindred’s solution is more than a robotic arm. It’s equipped with AI-enhanced computer vision, so it can work effectively in the dynamic conditions that we often find in a fulfillment environment. It responds to what it senses in real time and can even configure itself on the fly by changing the suction grip attachment while in motion.

The bottom line is, any solution that works for several different use cases is the result of compromises. That’s the nature of any multi-purpose device. We chose to optimize SORT for a specific step in the fulfillment process. That’s how we’re able to give it the ability to grasp, manipulate and place items with human-like accuracy — but with machine-like consistency and stamina.

And, like the people our robot works alongside of, SORT can learn on the job. Not only from its own experience, but based on the combined experience of other robots on the network as well.

RaaS can aid robotics adoption

Victor Anjos, VP of engineering, Kindred

Have you always offered both the AI and robotics elements of your products through an RaaS model?

Anjos: Yes, we have. Both are included in RaaS, and it has been an important part of our model.

Can you give an example of how RaaS works during implementation and then for ongoing support? What sorts of issues can arise?

Anjos: With our RaaS model, the assets are owned and maintained by Kindred, while the customer pays for the picking service as needed. Implementing RaaS eliminates the customer’s upfront capital expense.

Of course, the customer still needs to allocate operational and IT resources to make the RaaS implementation a success.

Is RaaS evolving or becoming more widespread and understood? Are there still pockets of supply chains that aren’t familiar with leasing models?

Anjos: RaaS is a relatively new concept for the supply chain industry, but it’s attracting a lot of attention. The financial model aligns with their operating budgets. And customers have an ability to scale the use of robots to meet peak demand, increasing asset utilization throughout the year.

Are there situations where it’s better to develop robots in-house or buy them outright than to use RaaS?

Anjos: Every customer I’ve spoken with has their hands full managing fulfillment operations. They’re not very eager to hire a team of AI developers to build a fleet of robots and hire engineers to maintain them! And Kindred isn’t interested in selling apparel, so it all works out!

What issues can arise during a RaaS relationship, and how much should providers and clients collaborate?

Anjos: Every supply chain system implementation is unique. During implementation, Kindred’s customer-success team works with our customer to understand performance requirements, integrate Kindred robots into their existing warehouse processes and systems, and provide onsite and remote support to ensure the success of each implementation.

Do you see RaaS spreading from order fulfillment to retail stores? What else would you like to see?

Anjos: That’s very possible. Robot use is increasing across the entire retail industry, and the RaaS model certainly makes adoption of this technology even easier and more beneficial.

For example, I can see how some of the robotic technologies developed for traditional fulfillment centers could be used in urban or micro-fulfillment-center scenarios.

Neural Analytics partners with NGK Spark Plug to scale up medical robots

The Lucid Robotic System has received FDA clearance. Source: Neural Analytics

LOS ANGELES — Neural Analytics Inc., a medical robotics company developing and commercializing technologies to measure and track brain health, has announced a strategic partnership with NGK Spark Plug Co., a Japan-based company that specializes in comprehensive ceramics processing. Neural Analytics said the partnership will allow it to expand its manufacturing capabilities and global footprint.

Neural Analytics’ Lucid Robotic System (LRS) includes the Lucid M1 Transcranial Doppler Ultrasound System and NeuralBot system. The resulting autonomous robotic transcranial doppler (rTCD) platform is designed to non-invasively search, measure, and display objective brain blood-flow information in real time.

The Los Angeles-based company’s technology integrates ultrasound and robotics to empower clinicians with critical information about brain health to make clinical decisions. Through its algorithm, analytics, and autonomous robotics, Neural Analytics provides valuable information that can identify pathologies such as Patent Foramen Ovale (PFO), a form of right-to-left shunt.

Nagoya, Japan-based NGK Spark Plug claims to be the world’s leading manufacturer of spark plugs and automotive sensors, as well as a broad lineup of packaging, cutting tools, bio ceramics, and industrial ceramics. The company has more than 15,000 employees and develops products related to the environment, energy, next-generation vehicles, and the medical device and diagnostic industries.

Neural Analytics and NGK to provide high-quality parts, global access

“This strategic partnership between Neural Analytics and NGK Spark Plug is built on a shared vision for the future of global healthcare and a foundation of common values,” said Leo Petrossian, Ph.D., co-founder and CEO of Neural Analytics. “We are honored with this opportunity and look forward to learning from our new partners how they have built a great global enterprise.”

NGK Spark Plug has vast manufacturing expertise in ultra-high-precision ceramics. With this partnership, both companies said they are committed to working together to build high-quality products at a reasonable cost to allow greater access to technologies like the Lucid Robotic System.

“I am very pleased with this strategic partnership with Neural Analytics,” said Toru Matsui, executive vice president of NGK Spark Plug. “This, combined with a shared vision, is an exciting opportunity for both companies. This alliance enables the acceleration of their great technology to the greater market.”

This follows Neural Analytics’ May announcement of its Series C round close, led by Alpha Edison. In total, the company has raised approximately $70 million in funding to date.

Neural Analytics said it remains “committed to advancing brain healthcare through transformative technology to empower clinicians with the critical information needed to make clinical decisions and improve patient outcomes.”

Sea Machines Robotics to demonstrate autonomous spill response

Source: Sea Machines Robotics

BOSTON — Sea Machines Robotics Inc. this week said it has entered into a cooperative agreement with the U.S. Department of Transportation’s Maritime Administration to demonstrate how its autonomous technology can increase the safety, response time, and productivity of marine oil-spill response operations.

Sea Machines was founded in 2015 and claimed to be “the leader in pioneering autonomous control and advanced perception systems for the marine industries.” The company builds software and systems to increase the safety, efficiency, and performance of ships, workboats, and commercial vessels worldwide.

The U.S. Maritime Administration (MARAD) is an agency of the U.S. Department of Transportation that promotes waterborne transportation and its integration with other segments of the transportation system.

Preparing for oil-spill exercise

To make the on-water exercises possible, Sea Machines will install its SM300 autonomous-command system aboard a MARCO skimming vessel owned by Marine Spill Response Corp. (MSRC), a not-for-profit, U.S. Coast Guard-classified oil spill removal organization (OSRO). MSRC was formed with the Marine Preservation Association to offer oil-spill response services in accordance with the Oil Pollution Act of 1990.

Sea Machines plans to train MSRC personnel to operate its system. Then, on Aug. 21, Sea Machines and MSRC will execute simulated oil-spill recovery exercises in the harbor of Portland, Maine, before an audience of government, naval, international, environmental, and industry partners.

The response skimming vessel is manufactured by Seattle-based Kvichak Marine Industries and is equipped with a MARCO filter belt skimmer to recover oil from the surface of the water. This vessel typically operates in coastal or near-shore areas. Once installed, the SM300 will give the MSRC vessel the following new capabilities:

  • Remote autonomous control from an onshore location or secondary vessel,
  • ENC-based mission planning,
  • Autonomous waypoint tracking,
  • Autonomous grid line tracking (a generic sketch of this pattern follows this list),
  • Collaborative autonomy for multi-vessel operations,
  • Wireless remote payload control to deploy onboard boom and other response equipment, and
  • Obstacle detection and collision avoidance.
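
Of these, grid-line tracking is the most algorithmically concrete. A minimal sketch of the underlying pattern — a generic boustrophedon (“lawnmower”) waypoint generator, not Sea Machines’ implementation — looks like this:

```python
import numpy as np

def grid_waypoints(x_min, x_max, y_min, y_max, lane_spacing):
    """Boustrophedon coverage of a rectangular search box: each pass
    sweeps the full x extent, then steps lane_spacing in y and
    reverses direction. Purely illustrative."""
    waypoints = []
    for i, y in enumerate(np.arange(y_min, y_max + 1e-9, lane_spacing)):
        pair = [(x_min, y), (x_max, y)]
        waypoints.extend(pair if i % 2 == 0 else pair[::-1])
    return waypoints

# Example: a 200 m x 100 m search box with 20 m lanes
for wp in grid_waypoints(0.0, 200.0, 0.0, 100.0, 20.0):
    print(wp)
```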

Round-the-clock response

In addition, Sea Machines said, it enables minimally manned and unmanned autonomous maritime operations. Such configurations allow operators to respond to spill events 24/7 depending on recovery conditions, even when crews are unavailable or restricted, the company said. These configurations also reduce or eliminate exposure of crewmembers to toxic fumes and other safety hazards.

“Autonomous technology has the power to not only help prevent vessel accidents that can lead to spills, but also to facilitate better preparedness and aid in safer, more efficient, and more effective cleanup,” said Michael G. Johnson, CEO of Sea Machines. “We look forward to working closely with MARAD and MSRC in these industry-modernizing exercises.”

“Our No. 1 priority is the safety of our personnel at MSRC,” said John Swift, vice president at MSRC. “The ability to use autonomous technology — allowing response operations to continue in an environment where their safety may be at risk — furthers our mission of response preparedness.”

Sea Machines promises rapid ROI for multiple vessels

Sea Machines’ SM Series of products, which includes the SM300 and SM200, provides marine operators a new era of task-driven, computer-guided vessel control, bringing advanced autonomy within reach for small- and large-scale operations. SM products can be installed aboard existing or new-build commercial vessels with return on investment typically seen within a year.

In addition, Sea Machines has received funding from Toyota AI Ventures.

Sea Machines is also a leading developer of advanced perception and navigation assistance technology for a range of vessel types, including container ships. The company is currently testing its perception and situational awareness technology aboard one of A.P. Moller-Maersk’s new-build ice-class container ships.

Microrobots activated by laser pulses could deliver medicine to tumors

Targeting medical treatment to an ailing body part is a practice as old as medicine itself. Drops go into itchy eyes. A broken arm goes into a cast. But often what ails us is inside the body and is not so easy to reach. In such cases, a treatment like surgery or chemotherapy might be called for. A pair of researchers in Caltech’s Division of Engineering and Applied Science are working on an entirely new form of treatment — microrobots that can deliver drugs to specific spots inside the body while being monitored and controlled from outside the body.

“The microrobot concept is really cool because you can get micromachinery right to where you need it,” said Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering at the California Institute of Technology. “It could be drug delivery, or a predesigned microsurgery.”

The microrobots are a joint research project of Wang and Wei Gao, assistant professor of medical engineering, and are intended for treating tumors in the digestive tract.

Developing jet-powered microrobots

The microrobots consist of microscopic spheres of magnesium metal coated with thin layers of gold and parylene, a polymer that resists digestion. The layers leave a circular portion of the sphere uncovered, kind of like a porthole. The uncovered portion of the magnesium reacts with the fluids in the digestive tract, generating small bubbles. The stream of bubbles acts like a jet and propels the sphere forward until it collides with nearby tissue.
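
The article doesn’t spell out the chemistry, but magnesium’s reaction with water (and, in the stomach, with gastric acid) produces hydrogen gas, which forms the propulsive bubbles:

```latex
\mathrm{Mg} + 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{Mg(OH)_2} + \mathrm{H_2}\!\uparrow,
\qquad
\mathrm{Mg} + 2\,\mathrm{HCl} \;\rightarrow\; \mathrm{MgCl_2} + \mathrm{H_2}\!\uparrow
```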

On their own, magnesium spherical microrobots that can zoom around might be interesting, but they are not especially useful. To turn them from a novelty into a vehicle for delivering medication, Wang and Gao made some modifications to them.

First, a layer of medication is sandwiched between an individual microsphere and its parylene coat. Then, to protect the microrobots from the harsh environment of the stomach, they are enveloped in microcapsules made of paraffin wax.

Laser-guided delivery

At this stage, the spheres are capable of carrying drugs, but still lack the crucial ability to deliver them to a desired location. For that, Wang and Gao use photoacoustic computed tomography (PACT), a technique developed by Wang that uses pulses of infrared laser light.

The infrared laser light diffuses through tissues and is absorbed by oxygen-carrying hemoglobin molecules in red blood cells, causing the molecules to vibrate ultrasonically. Those ultrasonic vibrations are picked up by sensors pressed against the skin. The data from those sensors is used to create images of the internal structures of the body.
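
The standard photoacoustic relationship behind this (textbook background, not stated in the article) is that the initial pressure rise is proportional to the locally absorbed optical energy:

```latex
p_0 = \Gamma \, \mu_a \, F
```

where $\Gamma$ is the Grüneisen parameter (thermoelastic conversion efficiency), $\mu_a$ the optical absorption coefficient, and $F$ the local laser fluence. Hemoglobin’s high near-infrared absorption is what makes red blood cells strong ultrasound sources.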

Previously, Wang has shown that variations of PACT can be used to identify breast tumors, or even individual cancer cells. With respect to the microrobots, the technique has two jobs. The first is imaging. By using PACT, the researchers can find tumors in the digestive tract and also track the location of the microrobots, which show up strongly in the PACT images.

Microrobots activated by lasers and powered by magnesium jets could deliver medicine within the human body. Source: Caltech

Once the microrobots arrive in the vicinity of the tumor, a high-power continuous-wave near-infrared laser beam is used to activate them. Because the microrobots absorb the infrared light so strongly, they briefly heat up, melting the wax capsule surrounding them, and exposing them to digestive fluids.

At that point, the microrobots’ bubble jets activate, and the microrobots begin swarming. The jets are not steerable, so the technique is sort of a shotgun approach — the microrobots will not all hit the targeted area, but many will. When they do, they stick to the surface and begin releasing their medication payload.

“These micromotors can penetrate the mucus of the digestive tract and stay there for a long time. This improves medicine delivery,” Gao says. “But because they’re made of magnesium, they’re biocompatible and biodegradable.”

Pushing the concept

Tests in animal models show that the microrobots perform as intended, but Gao and Wang say they are planning to continue pushing the research forward.

“We demonstrated the concept that you can reach the diseased area and activate the microrobots,” Gao says. “The next step is evaluating the therapeutic effect of them.”

Gao also says he would like to develop variations of the microrobots that can operate in other parts of the body, and with different types of propulsion systems.

Wang says his goal is to improve how his PACT system interacts with the microrobots. The infrared laser light it uses has some difficulty reaching into deeper parts of the body, but he says it should be possible to develop a system that can penetrate further.

The paper describing the microrobot research, titled, “A microrobotic system guided by photoacoustic tomography for targeted navigation in intestines in vivo,” appears in the July 24 issue of Science Robotics. Other co-authors include Zhiguang Wu, Lei Li, Yiran Yang (MS ’18), Yang Li, and So-Yoon Yang of Caltech; and Peng Hu of Washington University in St. Louis. Funding for the research was provided by the National Institutes of Health and Caltech’s Donna and Benjamin M. Rosen Bioengineering Center.

Editor’s note: This article republished from the California Institute of Technology.

ASTM International proposes standards guide, center of excellence for exoskeletons

One of the barriers to more widespread development and adoption of exoskeletons for industrial, medical, and military use has been a lack of standards. ASTM International this month proposed a guide to provide standardized tools to assess and improve the usability and usefulness of exoskeletons and exosuits.

“Exoskeletons and exosuits can open up a world of possibilities, from helping workers perform industrial tasks without getting overstressed, to helping stroke victims learn to walk again, to helping soldiers carry heavier rucksacks longer distances,” said Kevin Purcell, an ergonomist at the U.S. Army Public Health Center’s Aberdeen Proving Ground. “But if it doesn’t help you perform your task and/or it’s hard to use, it won’t get used.”

He added that the guide will incorporate ways to understand the attributes of exoskeletons, as well as observation methods and questionnaires to help assess an exoskeleton’s performance and safety.

“The biggest challenge in creating this standard is that exoskeletons change greatly depending on the task the exoskeleton is designed to help,” said Purcell. “For instance, an industrial exoskeleton is a totally different design from one used for medical rehabilitation. The proposed standard will need to cover all types and industries.”

According to Purcell, industrial, medical rehabilitation, and defense users will benefit most from the proposed standard, as will exoskeleton manufacturers and regulatory bodies.

The F48 committee of ASTM International, previously known as the American Society for Testing and Materials, was formed in 2017. It is currently working on the proposed exoskeleton and exosuit standard, WK68719. Its six subcommittees include about 150 members, including startups, government agencies, and enterprises such as Boeing and BMW.

ASTM publishes first standards

In May, ASTM International published its first two standards documents, which are intended to provide consensus terminology (F3323) and set forth basic labeling and other informational requirements (F3358). The standards are available for purchase.

“Exoskeletons embody the technological promise of empowering humans to be all they can be,” said F48 committee member William Billotte, a physical scientist at the U.S. National Institute of Standards and Technology (NIST). “We want to make sure that labels and product information are clear, so that exoskeletons fit people properly, so that they function safely and effectively, and so that people can get the most from these innovative products.”

The committee is working on several proposed standards and welcomes more participation from members of the exoskeleton community. For example, Billotte noted that the committee seeks experts in cybersecurity due to the growing need to secure data, controls, and biometrics in many exoskeletons.

An exoskeleton vest at a BMW plant in Spartanburg, S.C. Source: BMW

Call for an exoskeleton center of excellence

Last month, ASTM International called for proposals for an “Exo Technologies Center of Excellence.” The winner would receive up to $250,000 per year for up to five years. Full proposals are due today, and the winner will be announced in September, said ASTM.

“Now is the right time to create a hub of collaboration among startups, companies, and other entities that are exploring how exoskeletons could support factory workers, patients, the military, and many other people,” stated ASTM International President Katharine Morgan. “We look forward to this new center serving as a catalyst for game-changing R&D, standardization, related training, partnerships, and other efforts that help the world benefit from this exciting new technology.”

The center of excellence is intended to fill knowledge gaps, provide a global hub for education and a neutral forum to discuss common challenges, and provide a library of community resources. It should also coordinate global links among stakeholders, said ASTM.

West Conshohocken, Pa.-based ASTM International said it meets World Trade Organization (WTO) principles for developing international standards. The organization’s standards are used globally in research and development, product testing, quality systems, commercial transactions, and more.
