U.S. Robotics Roadmap calls for white papers for revision

The U.S. National Robotics Roadmap was first created 10 years ago. Since then, government agencies, universities, and companies have used it as a reference for where robotics is going. The first roadmap was published in 2009 and then revised in 2013 and 2016. The objective is to publish the fourth version of the roadmap by summer 2020.

The team developing the U.S. National Robotics Roadmap has put out a call to engage about 150 to 200 people from academia and industry to ensure that it is representative of the robotics community’s view of the future. The roadmap will cover manufacturing, service, medical, first-responder, and space robotics.

The revised roadmap will also include considerations related to ethics and workforce. It will cover emerging applications, the key challenges to progress, and what research and development is needed.

Join community workshops

Three one-and-a-half-day workshops will be organized for community input to the roadmap. The workshops will take place as follows:

  • Sept. 11-12 in Chicago (organized by Nancy Amato, co-director of the Parasol Lab at Texas A&M University and head of the Department of Computer Science at the University of Illinois at Urbana-Champaign)
  • Oct. 17-18 in Los Angeles (organized by Maja Mataric, Chan Soon-Shiong distinguished professor of computer science, neuroscience, and pediatrics at the University of Southern California)
  • Nov. 15-16 in Lowell, Mass. (organized by Holly Yanco, director of the NERVE Center at the University of Massachusetts Lowell)

Participation in these workshops will be by invitation only. To participate, please submit a white paper/position statement of a maximum length of 1.5 pages. What are key use cases for robotics in a five-to-10-year perspective, what are key limitations, and what R&D is needed in that time frame? The white paper can address all three aspects or focus on one of them. The white paper must include the following information:

  • Name, affiliation, and e-mail address
  • A position statement (1.5 pages max)

Please submit the white paper as regular text or as a PDF file. Statements that are too long will be ignored. Position papers that only focus on current research are not appropriate. A white paper should present a future vision and not merely discuss state of the art.

White papers should be submitted by end of the day Aug. 15, 2019, to roadmapping@robotics-vo.org. Late submissions may not be considered. We will evaluate submitted white papers by Aug. 18 and select people for the workshops by Aug. 19.

Roadmap revision timeline

The workshop reports will be used as the basis for a synthesis of a new roadmap. The nominal timeline is:

  • August 2019: Call for white papers
  • September – November 2019: Workshops
  • December 2019: Workshops reports finalized
  • January 2020: Synthesis meeting at UC San Diego
  • February 2020: Publish draft roadmap for community feedback
  • April 2020: Revision of roadmap based on community feedback
  • May 2020: Finalize roadmap with graphics design
  • July 2020: Publish roadmap

If you have any questions about the process, the scope, etc., please send e-mail to Henrik I Christensen at hichristensen@eng.ucsd.edu.

Henrik I Christensen spoke at the Robotics Summit & Expo in Boston.

Editor’s note: Christensen, Qualcomm Chancellor’s Chair of Robot Systems at the University of California San Diego and co-founder of Robust AI, delivered a keynote address at last month’s Robotics Summit & Expo, produced by The Robot Report.


Wearable device could improve communication between humans, robots

An international team of scientists has developed an ultra-thin, wearable electronic device that facilitates smooth communication between humans and machines. The researchers said the new device is easy to manufacture and imperceptible when worn. It could be applied to human skin to capture various types of physical data for better health monitoring and early disease detection, or it could enable robots to perform specific tasks in response to physical cues from humans.

Wearable human-machine interfaces have had challenges — some are made from rigid electronic chips and sensors that are uncomfortable and restrict the body’s motion, while others consist of softer, more wearable elastic materials but suffer from slow response times.

While researchers have developed thin inorganic materials that wrinkle and bend, the challenge remains to develop wearable devices with multiple functions that enable smooth communication between humans and machines.

The team that wrote the paper included Kyoseung Sim, Zhoulyu Rao, Faheem Ershad, Jianming Lei, Anish Thukral, Jie Chen, and Cunjiang Yu at the University of Houston. It also included Zhanan Zou and Jianling Xiao at the University of Colorado, Boulder, and Qing-An Huang at Southeast University in Nanjing, China.

Wearable nanomembrane reads human muscle signals

Kyoseung Sim and company have designed a nanomembrane made from indium zinc oxide using a chemical processing approach that allows them to tune the material’s texture and surface properties. The resulting devices were only 3 to 4 micrometers thick and snake-shaped, properties that allow them to stretch and remain unnoticed by the wearer.

When worn by humans, the devices could collect signals from muscle and use them to directly guide a robot, enabling the user to feel what the robot hand experienced. The devices maintain their function when human skin is stretched or compressed.

Soft, unnoticeable, multifunctional, electronics-based, wearable human-machine interface devices. Credit: Cunjiang Yu

The researchers also found that sensors made from this nanomembrane material could be designed to monitor UV exposure (to mitigate skin disease risk) or to detect skin temperature (to provide early medical warnings), while still functioning well under strain.

Editor’s note: This month’s print issue of The Robot Report, which is distributed with Design World, focuses on exoskeletons. It will be available soon.


LUKE prosthetic arm has sense of touch, can move in response to thoughts

Keven Walgamott had a good “feeling” about picking up the egg without crushing it. What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of LUKE, a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by University of Utah biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (named after the robotic hand that Luke Skywalker got in The Empire Strikes Back) to mimic the way a human hand feels objects by sending the appropriate signals to the brain.

Their findings were published in a new paper co-authored by University of Utah biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark, and other colleagues in the latest edition of the journal Science Robotics.

Sending the right messages

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up, and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the University of Utah, was able to pluck grapes without crushing them, pick up an egg without cracking it, and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

Those things are accomplished through a complex series of mathematical calculations and modeling.

Keven Walgamott wears the LUKE prosthetic arm. Credit: University of Utah Center for Neural Interfaces

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicon “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the University of Utah team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by University of Utah biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array.

The Array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. Performing tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert, because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the Array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how the brain deals with transitions in information when it first touches something. Upon first contact with an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this pattern was a big step.
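
To make that transition pattern concrete, here is a minimal sketch of an onset-burst-then-taper stimulation profile. The pulse rates and decay time constant are illustrative assumptions, not the Utah team's published parameters.

    import numpy as np

    def biomimetic_pulse_rate(t, contact_rate=300.0, steady_rate=60.0, tau=0.15):
        """Illustrative stimulation pulse rate (Hz) after first contact at t = 0.

        A burst at contact decays exponentially toward a lower steady-state rate,
        mimicking the onset transient described above. All numbers are assumptions.
        """
        return steady_rate + (contact_rate - steady_rate) * np.exp(-t / tau)

    # Pulse rate over the first half second of contact
    t = np.linspace(0.0, 0.5, 6)
    print(np.round(biomimetic_pulse_rate(t), 1))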

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions including the University of Utah’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering, and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Editor’s note: Reposted from the University of Utah.


TIAGo++ robot from PAL Robotics ready for two-armed tasks

Among the challenges for developers of mobile manipulation and humanoid robots is the need for an affordable and flexible research platform. PAL Robotics last month announced its TIAGo++, a robot that includes two arms with seven degrees of freedom each.

As with PAL Robotics‘ one-armed TIAGo, the new model is based on the Robot Operating System (ROS) and can be expanded with additional sensors and end effectors. TIAGo++ is intended to enable engineers to create applications that include a touchscreen interface for human-robot interaction (HRI) and require simultaneous perception, bilateral manipulation, mobility, and artificial intelligence.

In addition, TIAGo++ supports NVIDIA’s Jetson TX2 as an extra for machine learning and deep learning development. Tutorials for ROS and open-source simulation for TIAGo are available online.

Barcelona, Spain-based PAL, which was named a “Top 10 ROS-based robotics company to watch in 2019,” also makes the Reem and TALOS robots.

Jordi Pagès, product manager of the TIAGo robot at PAL Robotics, responded to the following questions about TIAGo++ from The Robot Report:

For the development of TIAGo++, how did you collect feedback from the robotics community?

Pagès: PAL Robotics has a long history in research and development. We have been creating service robotics platforms since 2004. When we started thinking about the TIAGo robot development, we asked researchers from academia and industry which features they would expect or value in a platform for research.

Our goal with TIAGo has always been the same: to deliver a robust platform for research that easily adapts to diverse robotics projects and use cases. That’s why it was key to be in touch with robotics and AI developers from the start.

After delivering the robots, we usually ask for feedback and stay in touch with the research centers to learn about their activities and experiences, and the possible improvements or suggestions they would have. We do the same with the teams that use TIAGo for competitions like RoboCup or the European Robotics League [ERL].

At the same time, TIAGo is used in diverse European-funded projects where end users from different sectors, from healthcare to industry, are involved. This allows us to also learn from their feedback and keep finding new ways in which the platform could be of help in a user-centered way. That’s how we knew that adding a second arm to TIAGo’s portfolio of modular options could be of help to the robotics community.

How long did it take PAL Robotics to develop the two-armed TIAGo++ in comparison with the original model?

Pagès: Our TIAGo platform is very modular and robust, so it took us just a few months from making the decision to having a working TIAGo++ ready to go. The modularity of all our robots and our wide experience developing humanoids usually help us a lot in reducing redesign and production time.

The software is also very modular, with extensive use of ROS, the de facto standard robotics middleware. Our customers are able to upgrade, modify, and substitute ROS packages. That way, they can focus their attention on their real research on perception, navigation, manipulation, HRI, and AI.

How high can TIAGo++ go, and what’s its reach?

Pagès: TIAGo++ can reach the floor and up to 1.75m [5.74 ft.] high with each arm, thanks to the combination of its 7 DoF [seven degrees of freedom] arms and its lifting torso. The maximum extension of each arm is 92cm [36.2 in.]. In our experience, this workspace allows TIAGo to work in several environments like domestic, healthcare, and industry.

The TIAGo can extend in height, and each arm has a reach of about 3 ft. Source: PAL Robotics

What’s the advantage of seven degrees of freedom for TIAGo’s arms over six degrees?

Pagès: A 7-DoF arm is much better in this sense for people who will be doing manipulation tasks. Adding more DoFs means that the robot’s arm and end effector can reach poses — positions and orientations — that they couldn’t reach before.

Also, this enables developers to reduce singularities, avoiding non-desired abrupt movements. This means that TIAGo has more possibilities to move its arm and reach a certain pose in space, with a more optimal combination of movements.
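
Pagès’ point about redundancy can be sketched with a quick rank argument: a 6-D task (position plus orientation) commanded through seven joints leaves a one-dimensional null space, i.e. self-motion that changes the elbow posture without moving the end effector. Below is a minimal numerical illustration, using a random matrix as a stand-in for the real Jacobian; it is not TIAGo’s kinematic model.

    import numpy as np

    # Stand-in Jacobian: 6 task dimensions (x, y, z, roll, pitch, yaw) x 7 joints.
    # A random full-rank matrix is used purely for illustration.
    rng = np.random.default_rng(0)
    J = rng.standard_normal((6, 7))

    rank = np.linalg.matrix_rank(J)
    print("task rank:", rank, "redundant DoF:", J.shape[1] - rank)   # 6 and 1

    # Any joint velocity in the null space moves the joints without moving the tool.
    _, _, Vt = np.linalg.svd(J)
    null_vector = Vt[-1]                       # basis of the 1-D null space
    print(np.allclose(J @ null_vector, 0.0, atol=1e-9))              # True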

What sensors and motors are in the robot? Are they off-the-shelf or custom?

Pagès: All our mobile-based platforms, like the TIAGo robot, combine many sensors. TIAGo has a laser and sonars to move around and localize itself in space, an IMU [inertial measurement unit], and an RGB-D camera in the head. It can have a force/torque sensor on the wrist, especially useful to work in HRI scenarios. It also has a microphone and a speaker.

TIAGo has current sensing in every joint of the arm, enabling a very soft, effortless torque control on each of the arms. The possibility of having an expansion panel with diverse connectors makes it really easy for developers to add even more sensors to it, like a thermal camera or a gripper camera, once they have TIAGo in their labs.

About the motors, TIAGo++ makes use of our custom joints, which integrate high-quality commercial components and our own electronic power management and control. All motors also have encoders to measure the current motor position.

What’s the biggest challenge that a humanoid like TIAGo++ can help with?

Pagès: TIAGo++ can help with tasks that require bimanipulation, in combination with navigation, perception, HRI, or AI. Even though it is true that a one-armed robot can already perform a wide range of tasks, there are many actions in our daily life that require two arms, or that are more comfortably or quickly done with two arms rather than one.

For example, two arms are good for grasping and carrying a box, carrying a platter, serving liquids, opening a bottle or a jar, folding clothes, or opening a wardrobe while holding an object. In the end, our world and tools have been designed for the average human body, which is with two arms, so TIAGo++ can adapt to that.

As a research platform based on ROS, is there anything that isn’t open-source? Are navigation and manipulation built in or modular?

Pagès: Most software is provided either open-sourced or with headers and dynamic libraries so that customers can develop applications making use of the given APIs or using the corresponding ROS interfaces at runtime.

For example, all the controllers in TIAGo++ are plugins of ros_control, so customers can implement their own controllers following our public tutorials and deploy them on the real robot or in the simulation.

Moreover, users can replace any ROS package by their own packages. This approach is very modular, and even if we provide navigation and manipulation built-in, developers can use their own navigation and manipulation instead of ours.
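
As a rough illustration of what using those ROS interfaces at runtime can look like, the sketch below sends a joint trajectory to a ros_control joint trajectory controller. The action name and joint names are assumptions based on common ros_control conventions, not PAL’s documented interface.

    #!/usr/bin/env python
    # Hypothetical sketch: command an arm through a ros_control
    # joint_trajectory_controller. Names below are assumptions, not PAL's docs.
    import rospy
    import actionlib
    from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
    from trajectory_msgs.msg import JointTrajectoryPoint

    rospy.init_node("arm_command_sketch")
    client = actionlib.SimpleActionClient(
        "/arm_left_controller/follow_joint_trajectory", FollowJointTrajectoryAction)
    client.wait_for_server()

    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = ["arm_left_%d_joint" % i for i in range(1, 8)]
    point = JointTrajectoryPoint()
    point.positions = [0.0] * 7                 # target joint angles in radians
    point.time_from_start = rospy.Duration(3.0)
    goal.trajectory.points.append(point)

    client.send_goal(goal)
    client.wait_for_result()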

Did PAL work with NVIDIA on design and interoperability, or is that an example of the flexibility of ROS?

Pagès: It is an example both of how easy it is to expand TIAGo with external devices and of how easy it is to integrate those devices in ROS.

One example of an application that our clients have developed using the NVIDIA Jetson TX2 is the “Bring me a beer” task from the Homer Team [at RoboCup] at the University of Koblenz-Landau. They made a complete application in which the TIAGo robot could understand a natural language request, navigate autonomously to the kitchen, open the fridge, recognize and select the requested beer, grasp it, and deliver it back to the person who asked for it.

As a company, we work with multiple partners, but we also believe that our users should be able to have a flexible platform that allows them to easily integrate off-the-shelf solutions they already have.

How much software support is there for human-machine interaction via a touchscreen?

Pagès: The idea behind integrating a touchscreen on TIAGo++ is to bring customers the possibility to implement their own graphical interface, so we provide full access to the device. We work intensively with researchers, and we provide platforms as open as our customers need, such as a haptic interface.

What do robotics developers need to know about safety and security?

Pagès: A list of safety measures and best practices is provided in the TIAGo handbook so that customers can ensure safety both around the robot and for the robot itself.

TIAGo also features some implicit control modes that help ensure safety during operation. For example, an effort control mode for the arms is provided so that collisions can be detected and the arm can be set in gravity compensation mode.

Furthermore, the wrist can include a six-axis force/torque sensor providing more accurate feedback about collisions or interactions of the end effector with the environment. This sensor can also be used to increase the safety of the robot. We provide this information to our customers and developers so they are always aware of the safety measures.

Have any TIAGo users moved toward commercialization based on what they’ve learned with PAL’s systems?

Pagès: At the moment, from the TIAGo family, we commercialize the TIAGo Base for intralogistics automation in indoor spaces such as factories or warehouses.

Some configurations of the TIAGo robot have been tested in pilots in healthcare applications. In the EnrichMe H2020 EU Project, the robot autonomously assisted older people at home for up to approximately two months.

In robotics competitions such as the ERL, teams have shown TIAGo’s outstanding performance in accomplishing specific tasks in a domestic environment. Two teams finished first and third at RoboCup@Home OPL 2019 in Sydney, Australia. The Homer Team won for the third time in a row using TIAGo — see it clean a toilet here.

The CATIE Robotics Team finished third in the first world championship in which it participated. In one task, for instance, it took out the trash.

The TIAGo robot is also used for European Union Horizon 2020 experiments in which collaborative robots that combine mobility with manipulation are used in industrial scenarios. This includes projects such as MEMMO for motion generation, Co4Robots for coordination, and RobMoSys for open-source software development.

Besides this research aspect, we have industrial customers that are using TIAGo to improve their manufacturing procedures.

How does TIAGo++ compare with, say, Rethink Robotics’ Baxter?

Pagès: With TIAGo++, besides the platform itself, you also get support, extra advanced software solutions, and assessment from a company that has been in the robotics sector for more than 15 years. Robots like the TIAGo++ also draw on our know-how in both software and hardware, knowledge the team has gathered from the development of cutting-edge biped humanoids like the torque-controlled TALOS.

From a technical point of view, TIAGo++ was made very compact to suit environments shared with people such as homes. Baxter was a very nice entry-point platform and was not originally designed to be a mobile manipulator but a fixed one. TIAGo++ can use the same navigation used in our commercial autonomous mobile robot for intralogistics tasks, the TIAGo Base.

Besides, TIAGo++ is a fully customizable robot in all aspects: You can select the options you want in hardware and software, so you get the ideal platform you want to have in your robotics lab. For a mobile manipulator with two 7-DoF arms, force/torque sensors, ROS-based, affordable, and with community support, we believe TIAGo++ should be a very good option.

The TIAGo community is growing around the world, and we are sure that we will see more and more robots helping people in different scenarios very soon.

What’s the price point for TIAGo++?

Pagès: The starting price is around €90,000 [$100,370 U.S.]. It really depends on the configuration, devices, computer power, sensors, and extras that each client can choose for their TIAGo robot, so the price can vary.


Challenges of building haptic feedback for surgical robots


Minimally invasive surgery (MIS) is a modern technique that allows surgeons to perform operations through small incisions (usually 5-15 mm). Although it has numerous advantages over older surgical techniques, MIS can be more difficult to perform. Some inherent drawbacks are:

  • Limited motion due to straight laparoscopic instruments and fixation enforced by the small incision in the abdominal wall
  • Impaired vision due to two-dimensional imaging
  • Long instruments that amplify the effects of the surgeon’s tremor
  • Poor ergonomics imposed on the surgeon
  • Loss of haptic feedback, which is distorted by friction forces on the instrument and reactionary forces from the abdominal wall.

Minimally Invasive Robotic Surgery (MIRS) offers solutions to either minimize or eliminate many of the pitfalls associated with traditional laparoscopic surgery. MIRS platforms such as Intuitive Surgical’s da Vinci, approved by the U.S. Food and Drug Administration in 2000, represent a historical milestone of surgical treatments. The ability to leverage the advantages of laparoscopic surgery while augmenting surgeons’ dexterity and visualization and eliminating the ergonomic discomfort of long surgeries makes MIRS undoubtedly an essential technology for patients, surgeons, and hospitals.

However, despite all the improvements brought by currently commercially available MIRS systems, haptic feedback is still a major limitation reported by robot-assisted surgeons. Because the interventionist no longer manipulates the instrument directly, the natural haptic feedback is eliminated. Haptics is a conjunction of kinesthetic (form and shape of muscles, tissues, and joints) and tactile (cutaneous texture and fine detail) perception, and it combines many physical variables such as force, distributed pressure, temperature, and vibration.

Direct benefits of sensing interaction forces at the surgical end-effector are:

  • Improved organic tissue characterization and manipulation
  • Assessment of anatomical structures
  • Reduction of sutures breakage
  • An overall improvement in the feel of robot-assisted surgery.

Haptic feedback also plays a fundamental role in shortening the learning curve for young surgeons in MIRS training. A tertiary benefit of accurate real-time direct force measurement is that the data collected from these sensors can be utilized to produce accurate tissue and organ models for surgical simulators used in MIS training. Futek Advanced Sensor Technology, an Irvine, Calif.-based sensor manufacturer, shared these tips on how to design and manufacture haptic sensors for surgical robotics platforms.

With a force, torque and pressure sensor enabling haptic feedback to the hands of the surgeon, robotic minimally invasive surgery can be performed with higher accuracy and dexterity while minimizing trauma to the patient. | Credit: Futek

Technical and economic challenges of haptic feedback

Adding to the inherent complexity of measuring haptics, engineers and neuroscientists also face important issues that require consideration prior to the sensor design and manufacturing stages. The location of the sensing element, which significantly influences measurement consistency, presents MIRS designers with a dilemma: should they place the sensor outside the abdominal wall, near the actuation mechanism driving the end-effector (a.k.a. Indirect Force Sensing), or inside the patient at the instrument tip, embedded on the end-effector (a.k.a. Direct Force Sensing)?

The pros and cons of these two approaches are associated with measurement accuracy, size restrictions and sterilization and biocompatibility requirements. Table 1 compares these two force measurement methods.

In MIRS applications, where very delicate instrument-tissue interaction forces need to be fed back precisely to the surgeon, measurement accuracy is a sine qua non, which makes intra-abdominal direct sensing the ideal option.

However, this novel approach not only brings the design and manufacturing challenges described in Table 1 but also demands higher reusability. Commercially available MIRS systems that are modular in design allow the laparoscopic instrument to be reused approximately 12 to 20 times. Adding the sensing element near the end-effector invariably increases the cost of the instrument and demands further consideration during the design stage in order to enhance sensor reusability.

Appropriate electronic components, strain measurement methods, and electrical connections have to withstand repeated autoclave cycles as well as survive high-pH washing. Coping with these special design requirements invariably increases the unit cost per sensor. However, the extended lifespan and higher number of cycles reduce the cost per cycle and make the direct measurement method financially viable.

Hermeticity of high-precision, sub-miniature load sensing elements is an equal challenge for intra-abdominal direct force measurement. The conventional approach to sealing electronic components is the adoption of conformal coatings, which are extensively used in submersible devices. While this solution provides protection in low-pressure water submersion environments for consumer electronics, coating protection is not sufficiently airtight and is not suitable for high-reliability medical, reusable, and sterilizable solutions.

Under extreme process controls, conformal coatings have been shown to be marginal, providing upwards of 20 to 30 autoclave cycles. The autoclave sterilization process presents a harsher physicochemical environment using high-pressure, high-temperature saturated steam. Similar to helium leak detection technology, saturated steam particles are much smaller than water particles and are capable of penetrating and degrading the coating over time, causing the device to fail in a way that is hard to predict.

An alternative, conventional approach to achieving hermeticity is to weld a header interface onto the sensor. Again, welding faces obstacles in miniaturized sensors due to size constraints. All in all, a novel and robust approach is a monolithic sensor using custom-formulated, CTE-matched (coefficient of thermal expansion), chemically neutral, high-temperature fused isolator technology to feed electrical conductors through the walls of the hermetically sealed active sensing element. The fused isolator technology has shown reliability in the hundreds to thousands of autoclave cycles.


Other design considerations for haptic feedback

As mentioned above, miniaturization, biocompatibility, autoclavability, and high reusability are some of the unique requirements imposed on a haptic sensor by the surgical environment. In addition, it is imperative that designers also meet requirements that are inherent to any high-performance force measurement device.

Compensation for extraneous loads (or crosstalk) provides optimal resistance to off-axis loads to assure maximum operating life and minimize reading errors. Force and torque sensors are engineered to capture forces along the Cartesian axes, typically X, Y, and Z. From these three orthogonal axes, up to six measurement channels are derived: three force channels (Fx, Fy, and Fz) and three torque or moment channels (Mx, My, and Mz). Theoretically, a load applied along one of the axes should not produce a measurement in any of the other channels, but this is not always the case. For a majority of force sensors, this undesired cross-channel interference will be between 1 and 5%, and, considering that one channel can capture extraneous loads from five other channels, the total crosstalk could be as high as 5 to 25%.

In robotic surgery, the sensor must be designed to negate these extraneous or crosstalk loads, which include friction between the end-effector instrument and the trocar, reactionary forces from the abdominal wall, and the gravitational effect of mass along the instrument axis. In some cases, miniaturized sensors are very limited in space and have to compensate for side loads using alternative methods such as electronic or algorithmic compensation.
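
The algorithmic compensation mentioned above is often implemented with a calibration matrix: the sensor is loaded along known axes, the 6x6 cross-sensitivity matrix is measured, and its inverse is applied to raw readings at runtime. Below is a simplified sketch with made-up numbers, not any particular manufacturer's calibration data.

    import numpy as np

    # Hypothetical 6x6 cross-sensitivity matrix measured during calibration:
    # rows are output channels (Fx, Fy, Fz, Mx, My, Mz), columns are applied loads.
    # Off-diagonal terms of 3% stand in for the crosstalk described above.
    C = np.eye(6) + 0.03 * (np.ones((6, 6)) - np.eye(6))
    compensation = np.linalg.inv(C)             # computed once, applied at runtime

    applied = np.array([10.0, 0, 0, 0, 0, 0])   # a pure 10 N load along X
    raw = C @ applied                           # what the uncompensated channels report
    corrected = compensation @ raw

    print(np.round(raw, 2))          # crosstalk appears on the other five channels
    print(np.round(corrected, 2))    # [10.  0.  0.  0.  0.  0.]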

Calibration of a direct inline force sensor imposes restrictions as well. The calibration fixtures are optimized with SR buttons to direct the load precisely through the sensor. If the calibration assembly is not equipped with such arrangements, the final calibration might be affected by parallel load paths.

Thermal effect is also a major challenge in strain measurement. Temperature variations cause material expansion, gage factor coefficient variation and other undesirable effects on the measurement result. For this reason, temperature compensation is paramount to ensure accuracy and long-term stability even when exposed to severe ambient temperature oscillations.

The measures to counteract temperature effects on the readings are:

  • The use of high-quality, custom, self-compensated strain gages compatible with the thermal expansion coefficient of the sensing element material
  • Use of a half or full Wheatstone bridge circuit configuration installed in both load directions (tension and compression) to correct for temperature drift (a brief sketch of why this cancellation works follows this list)
  • Full internal temperature compensation of zero balance and output range without the need for external conditioning circuitry.
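
As a brief sketch of why the bridge configuration in the second bullet rejects temperature drift: in an ideal full bridge, a temperature-induced apparent strain that appears equally on all four arms cancels in the bridge subtraction, while the mechanical signal (opposite signs on opposing arms) adds up. The strain values and gauge factor below are illustrative assumptions.

    def full_bridge_output(eps1, eps2, eps3, eps4, gauge_factor=2.0, v_excitation=5.0):
        """Ideal full Wheatstone bridge output (small-strain approximation).

        V_out = (Vex * GF / 4) * (eps1 - eps2 + eps3 - eps4), with arms 1 and 3
        in tension and arms 2 and 4 in compression.
        """
        return v_excitation * gauge_factor / 4.0 * (eps1 - eps2 + eps3 - eps4)

    mech = 500e-6      # 500 microstrain of mechanical signal
    thermal = 200e-6   # apparent strain from a temperature change, equal on all arms

    # The thermal term appears identically on every arm and cancels:
    print(full_bridge_output(mech + thermal, -mech + thermal,
                             mech + thermal, -mech + thermal))   # 0.005 V
    print(full_bridge_output(mech, -mech, mech, -mech))          # 0.005 V again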

In some special cases, the use of custom strain gages with fewer solder connections helps reduce temperature impacts from solder joints. A regular force sensor with four individual strain gages usually has upwards of 16 solder joints, while custom strain elements can reduce this to fewer than six. This design consideration improves reliability, since solder joints, as opportunities for failure, are significantly reduced.

During the design phase, it is also imperative to design such sensors for high reliability along with high-volume manufacturability, taking into consideration the equipment and processes that will be required should a device be designated for high-volume manufacturing. The automated, high-volume processes could differ slightly or significantly from the benchtop or prototype equipment used for producing lower volumes. Scaling up must maintain a focus on reducing failure points during the manufacturing process, along with failure points that could occur in the field.

Testing for medical applications is more about a measurement device’s ability to withstand a high number of cycles than about resisting strenuous structural stress. For medical sensors in particular, overload and fatigue testing must be performed in conjunction with sterilization testing, in an intercalated process of several alternating fatigue and sterilization cycles. The ability to survive hundreds of overload cycles while maintaining hermeticity translates into a failure-free, high-reliability sensor with a higher MTBF and a more competitive total cost of ownership.

Credit: Futek

Product development challenges

Although understanding the inherent design challenges of the haptic autoclavable sensor is imperative, the sensor manufacturer must be equipped with a talented multidisciplinary engineering team, in-house manufacturing capabilities supported by fully developed quality processes and product/project management proficiency to handle the complex, resource-limited, and fast-paced new product development environment.

A multidisciplinary approach will result in a sensor element that meets the specifications in terms of nonlinearity, hysteresis, repeatability and cross-talk, as well as an electronic instrument that delivers analog and digital output, high sampling rate and bandwidth, high noise-free resolution and low power consumption, both equally necessary for a reliable turnkey haptics measurement solution.

Strategic control of all manufacturing processes (machining, lamination, wiring, calibration), allows manufacturers to engineer sensors with a design for manufacturability (DFM) mentality. This strategic control of manufacturing boils down to methodically selecting the bill of material, defining the testing plans, complying with standards and protocols and ultimately strategizing the manufacturing phase based on economic constraints.


Electronic skin could give robots an exceptional sense of touch


The National University of Singapore developed the Asynchronous Coded Electronic Skin, an artificial nervous system that could give robots an exceptional sense of touch. | Credit: National University of Singapore.

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by researchers at the National University of Singapore (NUS).

The new electronic skin system has ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from NUS Materials Science and Engineering, was first reported in the prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hopes of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system does, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology, N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

Related: Challenges of building haptic feedback for surgical robots

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contact between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor, with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
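
The article does not spell out the coding scheme, but a toy example shows one way many independent sensors can share a single conductor: give each sensor a unique pseudo-random pulse signature and recover which sensors fired by correlating the summed line signal against each known signature. This is only a generic illustration, not the NUS team’s actual method.

    import numpy as np

    # Toy illustration only: unique +/-1 signatures per sensor, summed on one line.
    rng = np.random.default_rng(1)
    n_sensors, sig_len = 16, 256
    signatures = rng.choice([-1.0, 1.0], size=(n_sensors, sig_len))

    fired = {2, 7, 11}                                # sensors that were touched
    line = sum(signatures[i] for i in fired)          # superimposed on one conductor
    line = line + 0.2 * rng.standard_normal(sig_len)  # plus a little noise

    scores = signatures @ line / sig_len              # normalized correlation
    detected = sorted(i for i, s in enumerate(scores) if s > 0.5)
    print(detected)                                   # expected: [2, 7, 11]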

The ACES developed by Asst. Professor Tee (left) and his team responds 1000 times faster than the human sensory nervous system. | Credit: National University of Singapore

Smart electronic skins for robots and prosthetics

ACES has a simple wiring system and remarkable responsiveness even with increasing numbers of sensors. These key characteristics will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

Related: UT Austin Patent Gives Robots Ultra-Sensitive Skin (https://www.therobotreport.com/university-of-texas-austin-patent-gives-robots-ultra-sensitive-skin/)

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

Editor’s Note: This article was republished from the National University of Singapore.


Self-driving cars may not be best for older drivers, says Newcastle University study

VOICE member Ian Fairclough and study lead Dr. Shuo Li in test of older drivers. Source: Newcastle University

With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding we see them adjust their driving to avoid difficult situations,” explained Dr Shuo Li, an expert in intelligent transport systems at Newcastle University.

“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”

Potential game-changer

Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities under all situations without input from the driver.

But it’s not as clear-cut as it seems, said Li.

“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”

“But, unlike level four or five, there are still some situations where the car would ask the driver to take back control and at that point, they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people that switch between tasks is quite easy, but as we age, it becomes increasingly more difficult and this is further complicated if the conditions on the road are poor.”

Newcastle University DriveLAB tests older drivers

Led by Newcastle University’s Professor Phil Blythe and Dr Li, the Newcastle University team have been researching the time it takes for older drivers to take back control of an automated car in different scenarios and also the quality of their driving in these different situations.

Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to “take back” control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.

The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time); “active input” such as braking and taking the steering wheel (take-over time); and finally the point at which they registered the obstruction and indicated to move out and avoid it (indicator time).

“In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60mph, that means our older drivers would have needed an extra 35m warning distance — that’s equivalent to the length of 10 cars.

“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
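
The 35 m figure quoted above follows directly from the measured gap in response times. A quick check, assuming 60 mph is about 26.8 m/s and a car length of roughly 3.5 m:

    mph_to_ms = 1609.344 / 3600              # metres per second per mile per hour
    speed = 60 * mph_to_ms                   # ~26.8 m/s
    extra_time = 8.3 - 7.0                   # older vs. younger drivers, in seconds

    extra_distance = speed * extra_time
    print(round(extra_distance, 1))          # ~34.9 m, i.e. roughly 35 m
    print(round(extra_distance / 3.5, 1))    # ~10 car lengths at an assumed 3.5 m per car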

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.

In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.

VOICE member Pat Wilkinson. Source: Newcastle University

Designing automated cars of the future

The research team also explored older drivers’ opinions and requirements towards the design of automated vehicles after gaining first-hand experience with the technologies on the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.

The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.

“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”

“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”

Case studies of older drivers

Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.

Now 74, the former magistrate said it’s interesting to see how technology is changing and gradually taking control – and responsibility – away from the driver.

“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”

“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.

“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”

Ian Fairclough, 77 from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”

“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”

Fairclough took part in two of the studies in the VR simulator and said it was difficult to switch your attention quickly from one task to another.

“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear and practical.  I think perhaps you can have too many buttons.”

Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.


KIST researchers teach robot to trap a ball without coding

KIST’s research shows that robots can be intuitively taught to be flexible by humans rather than through numerical calculation or programming the robot’s movements. Credit: KIST

The Center for Intelligent & Interactive Robotics at the Korea Institute of Science and Technology, or KIST, said that a team led by Dr. Kee-hoon Kim has developed a way of teaching “impedance-controlled robots” through human demonstrations. The method uses surface electromyograms of muscles, and the team succeeded in teaching a robot to trap a dropped ball like a soccer player.

A surface electromyogram (sEMG) is an electric signal produced during muscle activation that can be picked up on the surface of the skin, said KIST, which is led by Pres. Byung-gwon Lee.

Recently developed impedance-controlled robots have opened up a new era of robotics based on the natural elasticity of human muscles and joints, which conventional rigid robots lack. Robots with flexible joints are expected to be able to run, jump hurdles and play sports like humans. However, the technology required to teach such robots to move in this manner has been unavailable until recently.

KIST uses human muscle signals to teach robots how to move

The KIST research team claimed to be the first in the world to develop a way of teaching new movements to impedance-controlled robots using human muscle signals. With this technology, which detects not only human movements but also muscle contractions through sEMG, it’s possible for robots to imitate movements based on human demonstrations.

Dr. Kee-hoon Kim’s team said it succeeded in using sEMG to teach a robot to quickly and adroitly trap a rapidly falling ball before it comes into contact with a solid surface or bounces too far to reach — similar to the skills employed by soccer players.

sEMG sensors were attached to a man’s arm, allowing him to simultaneously control the location and flexibility of the robot’s rapid upward and downward movements. The man then “taught” the robot how to trap a rapidly falling ball by giving a personal demonstration. After learning the movement, the robot was able to skillfully trap a dropped ball without any external assistance.
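
The article does not give KIST’s algorithm, but the general recipe for this kind of sEMG-driven impedance teaching can be sketched: rectify and low-pass filter the raw sEMG to get a contraction envelope, then map that envelope onto the stiffness of an impedance controller, so a tense arm commands a stiff robot and a relaxed arm a compliant one. All gains and limits below are illustrative assumptions.

    import numpy as np

    def semg_envelope(semg, fs=1000.0, cutoff=5.0):
        """Rectify raw sEMG and smooth it with a single-pole low-pass filter."""
        alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)
        env, level = np.zeros_like(semg), 0.0
        for i, sample in enumerate(np.abs(semg)):
            level += alpha * (sample - level)
            env[i] = level
        return env

    def stiffness_from_envelope(env, k_min=50.0, k_max=800.0, env_max=1.0):
        """Map normalized muscle activation to joint stiffness (illustrative units)."""
        activation = np.clip(env / env_max, 0.0, 1.0)
        return k_min + (k_max - k_min) * activation

    # Synthetic sEMG: quiet, then a strong contraction, then quiet again.
    rng = np.random.default_rng(0)
    semg = np.concatenate([0.05 * rng.standard_normal(500),
                           1.0 * rng.standard_normal(500),
                           0.05 * rng.standard_normal(500)])
    k = stiffness_from_envelope(semg_envelope(semg))
    print(round(k[250], 1), round(k[750], 1), round(k[1250], 1))   # low, high, low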

sEMG sensors attached to a man’s arm allowed him to control the location and flexibility of a robot’s rapid movements. Source: KIST

This research outcome, which shows that robots can be intuitively taught to be flexible by humans, has attracted much attention, as it was not accomplished through numerical calculation or programming of the robot’s movements. This study is expected to help advance the study of interactions between humans and robots, bringing us one step closer to a world in which robots are an integral part of our daily lives.

Kim said, “The outcome of this research, which focuses on teaching human skills to robots, is an important achievement in the study of interactions between humans and robots.”

Elephant Robotics’ Catbot designed to be a smaller, easier to use cobot


Small and midsize enterprises are just beginning to benefit from collaborative robot arms, or cobots, which are intended to be safer and easier to use than their industrial cousins. However, high costs and the difficulty of customization are still barriers to adoption. Elephant Robotics this week announced its Catbot, which it described as an “all in one safe robotic assistant.”

The cobot has six degrees of freedom, a 600mm (23.6 in.) reach, and a weight of 18kg (39.68 lb.). It has a payload capacity of 5kg (11 lb.). Elephant Robotics tested Catbot in accordance with the international safety standards EN ISO 13849:2008, PL d, and ISO 10218-1:2011, Clause 5.4.3, for human-machine interaction. A teach pendant and a power box are optional with Catbot.

Elephant Robotics CEO Joey Song studied in Australia. Upon returning home, he said, he “wanted to create a smaller in size robot that will be safe to operate and easy to program for any business owner with just a few keystrokes.”

Song founded Elephant Robotics in 2016 in Shenzhen, China, also known as “the Silicon Valley of Asia.” It joined the HAX incubator and received seed funding from Princeton, N.J.-based venture capital firm SOSV.

Song stated that he is committed to making human-robot collaboration accessible to any small business by eliminating the limitations of high prices and the need for highly skilled programming. Elephant Robotics also makes the Elephant and Panda series cobots for precise industrial automation.

Catbot includes voice controls

Repetitive tasks can lead to boredom, accidents, and poor productivity and quality, noted Elephant Robotics. Its cobots are intended to free human workers to be more creative. The company added that Catbot can save on costs and take on heavier workloads.

Controlling robots, even collaborative robots, can be difficult, and it is harder still for robots that need to be both precise and safe. Elephant Robotics cited Facebook's new PyRobot framework as an example of efforts to simplify robotic commands.

Catbot is built on an open platform so developers can share the skills they’ve developed, allowing others to use them or build on top of them.

Elephant Robotics claimed that it has made Catbot smarter and safer than other collaborative robots, offering “high efficiency and flexibility to various industries.” It includes force sensing and voice-command functions.

In addition, Catbot has an “all-in-one” design, cloud-based programming, and quick tool changing.

The catStore virtual shop offers a set of 20 basic skills. Elephant Robotics said that new skills could be developed for specific businesses, and they can be shared with other users on its open platform.


Catbot is designed to provide automated assistance to people in a variety of SMEs. Source: Elephant Robotics

Application areas

Elephant Robotics said its cobots are suitable for assembly, packaging, pick-and-place, and testing tasks, among others. Its arms work with a variety of end effectors. To increase flexibility, the company said, Catbot is designed to be easy to program for everything from high-precision tasks to "hefty ground projects."

According to Elephant Robotics, Catbot can be used for painting, photography, and giving massages. It could also serve as a personal barista or play table games with humans. In addition, Catbot could act as a helping hand in research workshops or as an automatic screwdriver, said the company.

Elephant Robotics’ site said it serves the agricultural and food, automotive, consumer electronics, educational and research, household device, and machining markets.

Catbot is available now for preorder, with deliveries set to start in August 2019. Contact Elephant Robotics at sales@elephantrobotics.com for more information on pricing or technical specifications.

TRI tackles manipulation research for reliable, robust human-assist robots

Wouldn’t it be amazing to have a robot in your home that could work with you to put away the groceries, fold the laundry, cook your dinner, do the dishes, and tidy up before the guests come over? For some of us, a robot assistant – a teammate – might only be a convenience.

But for others, including our growing population of older people, applications like this could be the difference between living at home or in an assisted care facility. Done right, we believe these robots will amplify and augment human capabilities, allowing us to enjoy longer, healthier lives.

Decades of prognostications about the future – largely driven by science fiction novels and popular entertainment – have encouraged public expectations that someday home robots will happen. Companies have been trying for years to deliver on such forecasts and figure out how to safely introduce ever more capable robots into the unstructured home environment.

Despite this age of tremendous technological progress, the robots we see in homes to date are primarily vacuum cleaners and toys. Most people don’t realize how far today’s best robots are from being able to do basic household tasks. When they see heavy use of robot arms in factories or impressive videos on YouTube showing what a robot can do, they might reasonably expect these robots could be used in the home now.

Bringing robots into the home

Why haven’t home robots materialized as quickly as some have come to expect? One big challenge is reliability. Consider:

  • If you had a robot that could load dishes into the dishwasher for you, what if it broke a dish once a week?
  • Or, what if your child brings home a “No. 1 DAD!” mug that she painted at the local art studio, and after dinner, the robot discards that mug into the trash because it didn’t recognize it as an actual mug?

A major barrier to bringing robots into the home is a set of core unsolved problems in manipulation that prevent reliability. As I presented this week at the Robotics: Science and Systems conference, the Toyota Research Institute (TRI) is working on fundamental issues in robot manipulation to tackle these unsolved reliability challenges. We have been pursuing a unique combination of robotics capabilities focused on dexterous tasks in an unstructured environment.

Unlike the sterile, controlled and programmable environment of the factory, the home is a “wild west” – unstructured and diverse. We cannot expect lab tests to account for every different object that a robot will see in your home. This challenge is sometimes referred to as “open-world manipulation,” as a callout to “open-world” computer games.

Despite recent strides in artificial intelligence and machine learning, it is still very hard to engineer a system that can deal with the complexity of a home environment and guarantee that it will (almost) always work correctly.

TRI addresses the reliability gap

Above is a demonstration video showing how TRI is exploring the challenge of robustness to address the reliability gap. We are using a robot loading dishes in a dishwasher as an example task. Our goal is not to design a robot that loads the dishwasher; rather, we use this task as a means to develop the tools and algorithms that can in turn be applied in many different applications.

Our focus is not on hardware, which is why we are using a factory robot arm in this demonstration rather than designing one that would be more appropriate for the home kitchen.

The robot in our demonstration uses stereo cameras mounted around the sink and deep learning algorithms to perceive objects in the sink. There are many robots out there today that can pick up almost any object — random object clutter clearing has become a standard benchmark robotics challenge. In clutter clearing, the robot doesn’t require much understanding about an object — perceiving the basic geometry is enough.

For example, the algorithm doesn't need to recognize whether the object is a plush toy, a toothbrush, or a coffee mug. Given this, these systems are also relatively limited in what they can do with those objects; for the most part, they can only pick up the objects and drop them in another location. In the robotics world, we sometimes refer to these robots as "pick and drop."

Loading the dishwasher is actually significantly harder than what most roboticists are currently demonstrating, and it requires considerably more understanding about the objects. Not only does the robot have to recognize a mug or a plate or “clutter,” but it has to also understand the shape, position, and orientation of each object in order to place it accurately in the dishwasher.
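
To make the contrast with "pick and drop" concrete, the sketch below shows the kind of per-object estimate that dishwasher loading needs. The class labels, sink frame, and rack-selection rules are illustrative assumptions, not TRI's actual representation.

```python
# Hypothetical per-object estimate for dishwasher loading; labels and rules are illustrative.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str               # "mug", "plate", "silverware", or "clutter"
    position: tuple          # (x, y, z) in the sink frame, meters
    orientation_rpy: tuple   # (roll, pitch, yaw), radians
    size: float              # characteristic dimension, meters

def choose_placement(obj: DetectedObject) -> str:
    """Pick a dishwasher rack (or the discard bin) from the object's class and pose."""
    if obj.label == "plate":
        return "bottom_rack"  # plates need a slot matching their orientation
    if obj.label == "mug":
        return "middle_rack"
    if obj.label == "silverware":
        return "top_rack"
    return "discard_bin"      # anything unrecognized is treated as clutter

print(choose_placement(DetectedObject("mug", (0.1, -0.2, 0.05), (0.0, 0.0, 1.2), 0.09)))
```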

TRI’s work in progress shows not only that this is possible, but that it can be done with robustness that allows the robot to continuously operate for hours without disruption.


Getting a grasp on household tasks

Our manipulation robot has a relatively simple hand — a two-fingered gripper. The hand can make relatively simple grasps on a mug, but its ability to pick up a plate is more subtle. Plates are large and may be stacked, so we have to execute a complex “contact-rich” maneuver that slides one gripper finger under and between plates in order to get a firm hold. This is a simple example of the type of dexterity that humans achieve easily, but that we rarely see in robust robotics applications.
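
One way to picture such a contact-rich maneuver is as a short sequence of Cartesian waypoints paired with gripper commands. The sketch below is purely illustrative; the offsets, frame, and step descriptions are assumptions rather than TRI's actual controller.

```python
# Illustrative waypoint sequence for sliding a finger between stacked plates (assumed values).
PLATE_GRASP_WAYPOINTS = [
    # (x, y, z) in meters relative to the plate rim, plus a gripper command
    {"pose": (0.00, 0.05, 0.02), "gripper": "open",  "note": "approach above the rim"},
    {"pose": (0.00, 0.02, 0.00), "gripper": "open",  "note": "slide lower finger between plates"},
    {"pose": (0.00, 0.00, 0.00), "gripper": "close", "note": "pinch the rim once the finger is under"},
    {"pose": (0.00, 0.00, 0.10), "gripper": "close", "note": "lift the plate clear of the stack"},
]

for wp in PLATE_GRASP_WAYPOINTS:
    print(wp["note"], "->", wp["pose"], wp["gripper"])
```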

Silverware can also be tricky: it is small and shiny, which makes it hard to see with a machine-learning camera. Plus, given that the robot hand is relatively large compared to the smaller sink, the robot occasionally needs to stop and nudge the silverware to the center of the sink in order to do the pick. Our system can also detect if an object is not a mug, plate, or silverware, label it as "clutter," and move it to a "discard" bin.

Connecting all of these pieces is a sophisticated task planner, which is constantly deciding what task the robot should execute next. This task planner decides whether it should pull out the bottom drawer of the dishwasher to load some plates, pull out the middle drawer for mugs, or pull out the top drawer for silverware.

Like the other components, we have made it resilient: if the drawer is suddenly closed while it needs to be open, the robot will stop, put the object down on the countertop, and pull the drawer back out to try again. This response shows how different this capability is from that of a typical precision, repetitive factory robot, which is usually isolated from human contact and environmental randomness.
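
The recovery behavior can be made concrete with a toy version of the loop: detect that the drawer has been closed, put the object down, reopen the drawer, and try again. The SimRobot stub and its methods below are hypothetical stand-ins, not TRI's planner interface.

```python
# Toy resilient task loop; SimRobot is a hypothetical stand-in that prints instead of moving hardware.
class SimRobot:
    def __init__(self):
        self.drawer_open = False
        self.interruptions = 1          # simulate the drawer being closed once mid-task

    def open_drawer(self):
        self.drawer_open = True
        print("drawer opened")

    def drawer_is_open(self):
        if self.interruptions:          # a person unexpectedly closes the drawer
            self.interruptions -= 1
            self.drawer_open = False
        return self.drawer_open

    def place_on_counter(self, item):
        print(f"placed {item} on the countertop")

    def place_in_drawer(self, item):
        print(f"loaded {item} into the drawer")

def load_item(robot, item):
    """Load one item, recovering if the drawer is closed while it needs to be open."""
    robot.open_drawer()
    while not robot.drawer_is_open():   # drawer was closed mid-task
        robot.place_on_counter(item)    # put the object down safely
        robot.open_drawer()             # pull the drawer back out and try again
    robot.place_in_drawer(item)

load_item(SimRobot(), "plate")
```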


Simulation key to success

The cornerstone of TRI’s approach is the use of simulation. Simulation gives us a principled way to engineer and test systems of this complexity with incredible task diversity and machine learning and artificial intelligence components. It allows us to understand what level of performance the robot will have in your home with your mugs, even though we haven’t been able to test in your kitchen during our development.

An exciting achievement is that we have made great strides in making simulation robust enough to handle the visual and mechanical complexity of this dishwasher loading task and on closing the “sim to real” gap. We are now able to design and test in simulation and have confidence that the results will transfer to the real robot. At long last, we have reached a point where we do nearly all of our development in simulation, which has traditionally not been the case for robotic manipulation research.

We can run many more tests in simulation, and more diverse ones. We are constantly generating random scenarios that test the individual components of dish loading as well as end-to-end performance.

Let me give you a simple example of how this works. Consider the task of extracting a single mug from the sink.  We generate scenarios where we place the mug in all sorts of random configurations, testing to find “corner cases” — rare situations where our perception algorithms or grasping algorithms might fail. We can vary material properties and lighting conditions. We even have algorithms for generating random, but reasonable, shapes of the mug, generating everything from a small espresso cup to a portly cylindrical coffee mug.
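
A hedged sketch of what this scenario randomization can look like follows. The parameter ranges are invented for illustration, and evaluate_grasp is a placeholder for running the full perception and grasping stack in simulation.

```python
# Illustrative scenario randomization for corner-case hunting; ranges and the
# evaluate_grasp placeholder are assumptions, not TRI's actual test harness.
import random

def random_scene():
    return {
        "mug_position": (random.uniform(-0.3, 0.3), random.uniform(-0.2, 0.2)),
        "mug_yaw": random.uniform(-3.14, 3.14),
        "mug_radius": random.uniform(0.03, 0.06),   # espresso cup to large coffee mug
        "mug_height": random.uniform(0.05, 0.12),
        "light_intensity": random.uniform(0.2, 1.5),
        "surface_friction": random.uniform(0.2, 0.9),
    }

def evaluate_grasp(scene) -> bool:
    """Placeholder for running perception + grasping on the scene in simulation."""
    return scene["light_intensity"] > 0.3           # toy failure mode for illustration

failures = [s for s in (random_scene() for _ in range(1000)) if not evaluate_grasp(s)]
print(f"{len(failures)} failing scenarios logged for the morning report")
```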

We conduct simulation testing through the night, and every morning we receive a report that gives us new failure cases that we need to address.

Early on, those failures were relatively easy to find, and easy to fix. Sometimes they are failures of the simulator — something happened in the simulator that could never have happened in the real world — and sometimes they are problems in our perception or grasping algorithms. We have to fix all of these failures.


TRI is using an industrial robot for household tasks to test its algorithms. Source: TRI

As we continue down this road to robustness, the failures are getting more rare and more subtle. The algorithms that we use to find those failures also need to get more advanced. The search space is so huge, and the performance of the system so nuanced, that finding the corner cases efficiently becomes our core research challenge.

Although we are exploring this problem in the kitchen sink, the core ideas and algorithms are motivated by, and are applicable to, related problems such as verifying automated driving technologies.

‘Repairing’ algorithms

The next piece of our work focuses on the development of algorithms that automatically "repair" the perception algorithm or controller whenever we find a new failure case. Because we are using simulation, we can test our changes not only against this newly discovered scenario, but also against all of the other scenarios we've discovered in the preceding tests.

Of course, it's not enough to fix this one test. We have to make sure we do not break any of the other tests that passed before. It's possible to imagine a not-so-distant future where this repair can happen directly in your kitchen: if one robot fails to handle your mug correctly, then all robots around the world learn from that mistake.
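
The repair-and-regress idea can be sketched abstractly as follows. Here retrain and passes are hypothetical stand-ins for retraining a component and re-evaluating a scenario in simulation; the toy usage at the bottom only illustrates the control flow.

```python
# Abstract sketch of the repair-and-regress loop; retrain/passes are hypothetical stand-ins.
def repair(policy, new_failure, regression_suite, retrain, passes):
    """Fix the new failure, then confirm every previously passing scenario still passes."""
    policy = retrain(policy, new_failure)
    surviving = [case for case in regression_suite if passes(policy, case)]
    if len(surviving) != len(regression_suite):
        raise RuntimeError("repair broke a previously passing scenario; keep iterating")
    regression_suite.append(new_failure)    # the fixed failure becomes a permanent test
    return policy, regression_suite

# Toy usage: the "policy" is just a numeric threshold and a case passes if it meets it.
retrain = lambda policy, case: min(policy, case)   # lower the threshold to cover the new case
passes = lambda policy, case: case >= policy
policy, suite = repair(0.5, 0.3, [0.6, 0.8], retrain, passes)
print(policy, suite)                               # 0.3 [0.6, 0.8, 0.3]
```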

We are committed to achieving dexterity and reliability in open-world manipulation. Loading a dishwasher is just one example in a series of experiments we will be using at TRI to focus on this problem.

It’s a long journey, but ultimately it will produce capabilities that will bring more advanced robots into the home. When this happens, we hope that older adults will have the help they need to age in place with dignity, working with a robotic helper that will amplify their capabilities, while allowing more independence, longer.

Editor’s note: This post by Dr. Russ Tedrake, vice president of robotics research at TRI and a professor at the Massachusetts Institute of Technology, is republished with permission from the Toyota Research Institute.