TIAGo++ robot from PAL Robotics ready for two-armed tasks

Among the challenges for developers of mobile manipulation and humanoid robots is the need for an affordable and flexible research platform. PAL Robotics last month announced its TIAGo++, a robot that includes two arms with seven degrees of freedom each.

As with PAL Robotics’ one-armed TIAGo, the new model is based on the Robot Operating System (ROS) and can be expanded with additional sensors and end effectors. TIAGo++ is intended to enable engineers to create applications that include a touchscreen interface for human-robot interaction (HRI) and require simultaneous perception, bilateral manipulation, mobility, and artificial intelligence.

In addition, TIAGo++ supports NVIDIA’s Jetson TX2 as an extra for machine learning and deep learning development. Tutorials for ROS and open-source simulation for TIAGo are available online.

Barcelona, Spain-based PAL, which was named a “Top 10 ROS-based robotics company to watch in 2019,” also makes the Reem and TALOS robots.

Jordi Pagès, product manager of the TIAGo robot at PAL Robotics, responded to the following questions about TIAGo++ from The Robot Report:

For the development of TIAGo++, how did you collect feedback from the robotics community?

Pagès: PAL Robotics has a long history in research and development. We have been creating service robotics platforms since 2004. When we started thinking about the TIAGo robot development, we asked researchers from academia and industry which features they would expect or value in a research platform.

Our goal with TIAGo has always been the same: to deliver a robust platform for research that easily adapts to diverse robotics projects and use cases. That’s why it was key to be in touch with robotics and AI developers from the start.

After delivering the robots, we usually ask for feedback and stay in touch with the research centers to learn about their activities and experiences, and the possible improvements or suggestions they would have. We do the same with the teams that use TIAGo for competitions like RoboCup or the European Robotics League [ERL].

At the same time, TIAGo is used in diverse European-funded projects where end users from different sectors, from healthcare to industry, are involved. This allows us to also learn from their feedback and keep finding new ways in which the platform can help in a user-centered way. That’s how we knew that adding a two-armed option to TIAGo’s modular portfolio could be of help to the robotics community.

How long did it take PAL Robotics to develop the two-armed TIAGo++ in comparison with the original model?

Pagès: Our TIAGo platform is very modular and robust, so it took us just a few months from making the decision to having a working TIAGo++ ready to go. The modularity of all our robots and our extensive experience developing humanoids help us a lot in reducing redesign and production time.

The software is also very modular, with extensive use of ROS, the de facto standard robotics middleware. Our customers are able to upgrade, modify, and substitute ROS packages. That way, they can focus their attention on their real research on perception, navigation, manipulation, HRI, and AI.

How high can TIAGo++ go, and what’s its reach?

Pagès: TIAGo++ can reach the floor and up to 1.75m [5.74 ft.] high with each arm, thanks to the combination of its 7 DoF [seven degrees of freedom] arms and its lifting torso. The maximum extension of each arm is 92cm [36.2 in.]. In our experience, this workspace allows TIAGo to work in a range of environments, including domestic, healthcare, and industrial settings.


The TIAGo can extend in height, and each arm has a reach of about 3 ft. Source: PAL Robotics

What’s the advantage of seven degrees of freedom for TIAGo’s arms over six degrees?

Pagès: A 7-DoF arm is much better for people who will be doing manipulation tasks. Adding more DoFs means that the robot’s arm and end effector can reach poses — positions and orientations — that they couldn’t reach before.

It also enables developers to reduce singularities, avoiding undesired abrupt movements. This means that TIAGo has more ways to move its arm to reach a given pose in space, with a more efficient combination of movements.
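For readers who want to see the redundancy argument concretely, here is a minimal NumPy sketch (not PAL's software; the Jacobians are random stand-ins rather than TIAGo's real kinematics). A 6-DoF arm's 6x6 Jacobian generally has an empty null space, so each end-effector motion pins down the joint motion, while a 7-DoF arm's 6x7 Jacobian has a one-dimensional null space: the arm can keep reconfiguring itself, for example to move away from a singularity or an obstacle, without disturbing the gripper.

```python
# Minimal illustration of kinematic redundancy with random stand-in Jacobians.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
J6 = rng.standard_normal((6, 6))  # stand-in Jacobian of a 6-DoF arm
J7 = rng.standard_normal((6, 7))  # stand-in Jacobian of a 7-DoF arm

print(null_space(J6).shape[1])    # 0: no self-motion; the pose fixes the joint velocities
print(null_space(J7).shape[1])    # 1: a whole family of joint motions leaves the pose fixed

# Any joint velocity in that null space changes the arm's configuration
# (e.g., swings the elbow) without moving the end effector at all.
qdot_null = null_space(J7)[:, 0]
print(np.allclose(J7 @ qdot_null, 0.0))  # True
```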

What sensors and motors are in the robot? Are they off-the-shelf or custom?

Pagès: All our mobile platforms, like the TIAGo robot, combine many sensors. TIAGo has a laser and sonars to move around and localize itself in space, an IMU [inertial measurement unit], and an RGB-D camera in the head. It can have a force/torque sensor on the wrist, which is especially useful in HRI scenarios. It also has a microphone and a speaker.

TIAGo has current sensing in every joint of the arm, enabling very soft, low-effort torque control on each of the arms. An expansion panel with diverse connectors makes it really easy for developers to add even more sensors, like a thermal camera or a gripper camera, once they have TIAGo in their labs.

As for the motors, TIAGo++ makes use of our custom joints, which integrate high-quality commercial components with our own electronics for power management and control. All motors also have encoders to measure the current motor position.

What’s the biggest challenge that a humanoid like TIAGo++ can help with?

Pagès: TIAGo++ can help with tasks that require bimanipulation, in combination with navigation, perception, HRI, or AI. Even though a one-armed robot can already perform a wide range of tasks, there are many actions in our daily life that require two arms, or that are more comfortably or quickly done with two arms rather than one.

For example, two arms are good for grasping and carrying a box, carrying a platter, serving liquids, opening a bottle or a jar, folding clothes, or opening a wardrobe while holding an object. In the end, our world and tools have been designed for the average human body, which has two arms, so TIAGo++ can adapt to that.

As a research platform based on ROS, is there anything that isn’t open-source? Are navigation and manipulation built in or modular?

Pagès: Most software is provided either open source or with headers and dynamic libraries, so that customers can develop applications making use of the given APIs or using the corresponding ROS interfaces at runtime.

For example, all the controllers in TIAGo++ are plugins of ros_control, so customers can implement their own controllers following our public tutorials and deploy them on the real robot or in the simulation.

Moreover, users can replace any ROS package with their own packages. This approach is very modular, and even though we provide navigation and manipulation built in, developers can use their own navigation and manipulation instead of ours.
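As a rough sketch of what that runtime modularity looks like in practice, the snippet below uses the standard ros_control controller_manager service to stop one arm controller and start a user-written replacement. The controller names are hypothetical placeholders, not PAL's actual controller names; only the controller_manager interface shown is standard ROS.

```python
#!/usr/bin/env python
# Swap a running ros_control controller for a custom one at runtime.
# Controller names below are hypothetical placeholders for illustration only.
import rospy
from controller_manager_msgs.srv import SwitchController, SwitchControllerRequest

rospy.init_node("swap_arm_controller")
rospy.wait_for_service("/controller_manager/switch_controller")
switch = rospy.ServiceProxy("/controller_manager/switch_controller", SwitchController)

req = SwitchControllerRequest()
req.stop_controllers = ["arm_left_controller"]        # hypothetical stock controller
req.start_controllers = ["my_custom_arm_controller"]  # user-written ros_control plugin
req.strictness = SwitchControllerRequest.STRICT       # fail if either step cannot be done

resp = switch(req)
rospy.loginfo("Controller switch succeeded: %s", resp.ok)
```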

Did PAL work with NVIDIA on design and interoperability, or is that an example of the flexibility of ROS?

Pagès: It is an example both of how easy it is to expand TIAGo with external devices and of how easy it is to integrate these devices in ROS.

One example of applications that our clients have developed using the NVIDIA Jetson TX2 is the “Bring me a beer” task from the Homer Team [at RoboCup], at the University of Koblenz-Landau. They made a complete application in which the TIAGo robot could understand a natural-language request, navigate autonomously to the kitchen, open the fridge, recognize and select the requested beer, grasp it, and deliver it back to the person who asked for it.

As a company, we work with multiple partners, but we also believe that our users should be able to have a flexible platform that allows them to easily integrate off-the-shelf solutions they already have.

How much software support is there for human-machine interaction via a touchscreen?

Pagès: The idea behind integrating a touchscreen on TIAGo++ is to give customers the possibility of implementing their own graphical interface, so we provide full access to the device. We work intensively with researchers, and we provide platforms that are as open as our customers need, including options such as a haptic interface.

What do robotics developers need to know about safety and security?

Pagès: A list of safety measures and best practices is provided in the TIAGo robot handbook so that customers can ensure safety both around the robot and for the robot itself.

TIAGo also features some implicit control modes that help ensure safety during operation. For example, an effort control mode for the arms is provided so that collisions can be detected and the arm can be set in gravity-compensation mode.

Furthermore, the wrist can include a six-axis force/torque sensor, providing more accurate feedback about collisions or interactions of the end effector with the environment. This sensor can also be used to increase the safety of the robot. We provide this information to our customers and developers so they are always aware of the safety measures.

Have any TIAGo users moved toward commercialization based on what they’ve learned with PAL’s systems?

Pagès: At the moment, from the TIAGo family, we commercialize the TIAGo Base for intralogistics automation in indoor spaces such as factories or warehouses.

Some configurations of the TIAGo robot have been tested in pilots in healthcare applications. In the EnrichMe H2020 EU project, the robot autonomously assisted elderly people at home for up to approximately two months.

In robotics competitions such as the ERL, teams have shown TIAGo’s outstanding performance in accomplishing specific actions in a domestic environment. Two teams finished first and third in the RoboCup@Home OPL 2019 in Sydney, Australia. The Homer Team won for the third time in a row using TIAGo, even having it clean a toilet.

The CATIE Robotics Team finished third in the first world championship in which it participated. In one task, for instance, the robot took out the trash.

The TIAGo robot is also used for European Union Horizon 2020 experiments in which collaborative robots that combine mobility with manipulation are used in industrial scenarios. This includes projects such as MEMMO for motion generation, Co4Robots for coordination, and RobMoSys for open-source software development.

Besides this research aspect, we have industrial customers that are using TIAGo to improve their manufacturing procedures.

How does TIAGo++ compare with, say, Rethink Robotics’ Baxter?

Pagès: With TIAGo++, besides the platform itself, you also get support, extra advanced software solutions, and assessment from a company that has been in the robotics sector for more than 15 years. Robots like TIAGo++ also draw on our know-how in both software and hardware, knowledge that the team has gathered from developing cutting-edge biped humanoids like the torque-controlled TALOS.

From a technical point of view, TIAGo++ was made very compact to suit environments shared with people, such as homes. Baxter was a very nice entry-point platform, but it was originally designed as a fixed manipulator, not a mobile one. TIAGo++ can use the same navigation used in our commercial autonomous mobile robot for intralogistics tasks, the TIAGo Base.

Besides, TIAGo++ is a fully customizable robot in all aspects: You can select the hardware and software options you want, so you get the ideal platform for your robotics lab. For an affordable, ROS-based mobile manipulator with two 7-DoF arms, force/torque sensors, and community support, we believe TIAGo++ is a very good option.

The TIAGo community is growing around the world, and we are sure that we will see more and more robots helping people in different scenarios very soon.

What’s the price point for TIAGo++?

Pagès: The starting price is around €90,000 [$100,370 U.S.]. It really depends on the configuration, devices, computing power, sensors, and extras that each client chooses for their TIAGo robot, so the price can vary.


Anki shutdown: how the robotics world is reacting

Dealing yet another massive blow to the consumer robotics industry, Anki shut down after raising $200 million in funding since it was founded in 2010. A new round of funding reportedly fell through at the last minute, leaving the San Francisco-based company with no other option but to lay off its entire staff on Wednesday.

Other recent consumer robotics failures such as Jibo, Keecker, Laundroid, and Mayfield Robotics pale in comparison to Anki going out of business. Anki said it had sold more than 1.5 million robots as of late 2018, made nearly $100 million in revenue in 2017, and expected to exceed that figure in 2018.

As you can imagine, the robotics industry has been reacting to the Anki shutdown all over social media. Many were shocked, many were not, and some are sharing lessons to be learned. Below is a snapshot of the social media reaction. If you’d like to share your thoughts about Anki, please leave a message in the comments.

Giving robots a better feel for object manipulation


A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch – and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials – “particles” – interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration – such as a “T” shape – that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

A new “particle simulator” developed by MIT improves robots’ abilities to mold materials into simulated target shapes and interact with solid objects and liquids. This could give robots a refined touch for industrial applications or for personal robotics. | Credit: MIT

Dynamic graphs

A key innovation behind the model, called the “dynamic particle interaction network” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with directed edges, which represent an interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres that combine to make up a liquid or a deformable object.
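As a rough illustration of such a graph, the NumPy sketch below (particle count, positions, and neighborhood radius are invented for the example) connects every pair of particles that lie within a given radius with a directed edge; the real DPI-Nets graph construction and its node and edge features are considerably richer.

```python
# Build a toy dynamic interaction graph: nodes are particles, directed edges
# connect particles within a neighborhood radius. Rebuilt every step as particles move.
import numpy as np

def build_interaction_graph(positions, radius):
    """Return an array of directed edges (sender, receiver) for nearby particle pairs."""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]       # pairwise offsets
    dists = np.linalg.norm(diffs, axis=-1)
    senders, receivers = np.where((dists < radius) & ~np.eye(n, dtype=bool))
    return np.stack([senders, receivers], axis=1)               # shape: (num_edges, 2)

positions = np.random.rand(200, 3) * 0.1   # 200 particles scattered in a 10 cm cube
edges = build_interaction_graph(positions, radius=0.02)
print(edges.shape)
```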

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material – rigid, deformable, and liquid – to shoot a signal that predicts particle positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.
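A minimal sketch of one such propagation step is shown below, with tiny untrained stand-in networks: each directed edge computes a message from its sender's and receiver's states, incoming messages are summed at every node, and an update function predicts each particle's displacement for the next time step. DPI-Nets' real networks, input features, and per-material handling are far more involved; this only shows the message-passing pattern.

```python
# One toy message-passing ("propagation") step over a particle graph.
import numpy as np

rng = np.random.default_rng(0)
D = 3                                    # particle state: just a 3-D position here
W_msg = rng.standard_normal((2 * D, D))  # stand-in "edge message" network
W_upd = rng.standard_normal((2 * D, D))  # stand-in "node update" network

def propagate(positions, edges):
    senders, receivers = edges[:, 0], edges[:, 1]
    # message per directed edge, computed from the sender's and receiver's states
    msg = np.tanh(np.concatenate([positions[senders], positions[receivers]], axis=1) @ W_msg)
    agg = np.zeros_like(positions)
    np.add.at(agg, receivers, msg)       # sum incoming messages at each particle
    # predict each particle's displacement for the next time step
    delta = np.tanh(np.concatenate([positions, agg], axis=1) @ W_upd)
    return positions + delta

positions = rng.random((50, 3)) * 0.1
senders, receivers = np.where(~np.eye(50, dtype=bool))
edges = np.stack([senders, receivers], axis=1)   # fully connected, just for the demo
print(propagate(positions, edges).shape)         # (50, 3)
```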

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object moves by the same calculated translation and rotation, so particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect is different: Perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
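The sketch below outlines that loop under stated assumptions: `learned_sim` (with `predict` and `update` methods), `observe_particles`, and `execute` are hypothetical stand-ins for the learned simulator, the perception pipeline, and the robot interface, not the authors' actual API. The robot picks the candidate pinch whose simulated outcome lands the particles closest to the target shape, executes it, then uses the gap between prediction and observation to refine the simulator online.

```python
# Toy shaping loop: simulate candidate actions, act, then adapt the simulator.
import numpy as np

def chamfer_error(pred, target):
    """Symmetric nearest-neighbor distance between two particle sets."""
    d = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def shaping_step(learned_sim, observe_particles, execute, candidate_actions, target):
    current = observe_particles()                     # perceived particle positions
    # pick the pinch whose predicted outcome best matches the target shape
    best = min(candidate_actions,
               key=lambda a: chamfer_error(learned_sim.predict(current, a), target))
    execute(best)                                     # apply it on the real robot
    observed = observe_particles()
    learned_sim.update(current, best, observed)       # prediction error refines the model
    return chamfer_error(observed, target)            # remaining shape error
```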

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

Editor’s Note: This article was republished with permission from MIT News.

Build better robots by listening to customer backlash

In the wake of the closure of Apple’s autonomous car division (Project Titan) this week, one questions whether Steve Jobs’ axiom still holds true. “Some people say, ‘Give the customers what they want.’ But that’s not my approach. Our job is to figure out what they’re going to want before they do,” declared Jobs, who continued with an analogy: “I think Henry Ford once said, ‘If I’d asked customers what they wanted, they would have told me, ‘a faster horse!’” Titan joins a growing graveyard of autonomous innovations, which is filled with the tombstones of Baxter, Jibo, Kuri, and many broken quadcopters. If anything holds true, not every founder is Steve Jobs or Henry Ford, and listening to public backlash could be a bellwether for success.

Adam Jonas of Morgan Stanley announced on Jan. 9, 2019 from the Consumer Electronic Show (CES) floor, “It’s official. AVs are overhyped. Not that the safety, economic, and efficiency benefits of robotaxis aren’t valid and noble. They are. It’s the timing… the telemetry of adoption for L5 cars without safety drivers expected by many investors may be too aggressive by a decade… possibly decades.”

The timing sentiment is probably best echoed by the backlash from the inhabitants of Chandler, Arizona, who have been protesting vocally, even resorting to violence, against Waymo’s self-driving trials on their streets. This rancor came to a head in August when a 69-year-old local pointed his pistol at the robocar (and its human safety driver).

In a profile of the Arizona beta trial, The New York Times interviewed some of the loudest advocates against Waymo in the Phoenix suburb. Erik and Elizabeth O’Polka expressed frustration with their elected leaders in turning their neighbors and their children into guinea pigs for artificial intelligence.

Elizabeth adamantly decried, “They didn’t ask us if we wanted to be part of their beta test.” Her husband strongly agreed: “They said they need real-world examples, but I don’t want to be their real-world mistake.” The couple has been warned several times by the Chandler police to stop attempting to run Waymo cars off the road. Elizabeth confessed to the Times, “that her husband ‘finds it entertaining to brake hard’ in front of the self-driving vans, and that she herself ‘may have forced them to pull over’ so she could yell at them to get out of their neighborhood.” The reporter revealed that the backlash tensions started to boil “when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.”

Rethink's Baxter robot was the subject of a user backlash because of design limitations.

The deliberate sabotaging by the O’Polkas could be indicative of the attitudes of millions of citizens who feel ignored by the speed of innovation. Deployments that run oblivious to this view, relying solely on the excitement of investors and insiders, ultimately face backlash when customers flock to competitors.

In the cobot world, the early battle between Rethink Robotics and Universal Robots (UR) is probably one of the most high-flying examples of tone-deaf invention by engineers. Rethink’s eventual demise was a classic case of form over function with a lot of hype sprinkled on top.

Rodney Brooks’ collaborative robotics enterprise raised close to $150 million in its short, decade-long existence. The startup rode the coattails of its famous co-founder, who is often referred to as the godfather of robotics, before ever delivering a product.

Dedicated Rethink distributor Dan O’Brien recalled, “I’ve never seen a product get so much publicity. I fell in love with Rethink in 2010.” Its first product, Baxter, was released in 2012 and promised to bring safety, productivity, and a little whimsy to the factory floor. The robot stood around six feet tall, with two bright red arms connected to an animated screen complete with friendly facial expressions.

At the same time, Rethink’s robots were not able to perform as advertised in industrial environments, leading to a backlash and slow adoption. The problem stemmed from Brooks’ insistence on licensing the actuation technology, “Series Elastic Actuators (SEAs),” from his former employer MIT instead of embracing the leading actuator technology, Harmonic Drive, for its arm motion. Users demanded greater precision from their machines, which competitors such as UR, a Harmonic Drive customer, took the lead in delivering.


Universal Robots’ cobots perform better than those of the late Rethink Robotics.

The backlash to Baxter is best illustrated by the comments of Steve Leach, president of Numatic Engineering, an automation integrator. In 2010, Leach hoped that Rethink could be “the iPhone of the industrial automation world.”

However, “Baxter wasn’t accurate or smooth,” said Leach, who was dismayed after seeing the final product. “After customers watched the demo, they lost interest because Baxter was not able to meet their needs.”

“We signed on early, a month before Baxter was released, and thought the software and mechanics would be refined. But they were not,” sighed Leach. In the six years since Baxter’s disappointing launch, Rethink did little to address the SEA problem. Most of the 1,000 Baxters sold by Rethink were delivered to academia, not commercial industry.

By contrast, Universal has booked more than 27,000 robots since its founding in 2005. Even Leach, who spent a year passionately trying to sell a single Baxter unit, switched to UR and sold his first one within a week. Leach elaborated, “From the ground up, UR’s firmware and hardware were specifically developed for industrial applications and met the expectations of those customers. That’s really where Rethink missed the mark.”

This garbage can robot seen at CES was designed to be cheap and avoid consumer backlash.

As machines permeate human streets, factories, offices, and homes, building a symbiotic relationship between intended operators and creators is even more critical. Too often, I meet entrepreneurs who demonstrate concepts with little input from potential buyers. This past January, the aisles of CES were littered with such items, but the one above was designed with a potential backlash in mind.

Simplehuman, the product development firm known for its elegantly designed housewares, unveiled a $200 aluminum robot trash can. This is part of a new line of Simplehuman’s own voice-activated products, potentially competing with Amazon Alexa. In the words of its founder, Frank Yang, “Sometimes, it’s just about pre-empting the users’ needs, and including features we think they would appreciate. If they don’t, we can always go back to the drawing board and tweak the product again.”

