Wearable device could improve communication between humans, robots

An international team of scientists has developed an ultra-thin, wearable electronic device that facilitates smooth communication between humans and machines. The researchers said the new device is easy to manufacture and imperceptible when worn. It could be applied to human skin to capture various types of physical data for better health monitoring and early disease detection, or it could enable robots to perform specific tasks in response to physical cues from humans.

Wearable human-machine interfaces have faced persistent trade-offs: some are built from rigid electronic chips and sensors that are uncomfortable and restrict the body’s motion, while others use softer, more wearable elastic materials but suffer from slow response times.

While researchers have developed thin inorganic materials that wrinkle and bend, the challenge remains to develop wearable devices with multiple functions that enable smooth communication between humans and machines.

The team that wrote the paper included Kyoseung Sim, Zhoulyu Rao, Faheem Ershad, Jianming Lei, Anish Thukral, Jie Chen, and Cunjiang Yu at the University of Houston. It also included Zhanan Zou and Jianling Xiao at the University of Colorado, Boulder, and Qing-An Huang at Southeast University in Nanjing, China.

Wearable nanomembrane reads human muscle signals

Sim and his colleagues designed a nanomembrane made from indium zinc oxide using a chemical processing approach that lets them tune the material’s texture and surface properties. The resulting devices are only 3 to 4 micrometers thick and serpentine in shape, properties that allow them to stretch and go unnoticed by the wearer.

When worn by humans, the devices could collect muscle signals and use them to directly guide a robot, while also letting the user feel what the robot hand experienced. The devices maintain their function when human skin is stretched or compressed.


Soft, unnoticeable, multifunctional, electronics-based, wearable human-machine interface devices. Credit: Cunjiang Yu

The researchers also found that sensors made from this nanomembrane material could be designed to monitor UV exposure (to mitigate skin disease risk) or to detect skin temperature (to provide early medical warnings), while still functioning well under strain.

Editor’s note: This month’s print issue of The Robot Report, which is distributed with Design World, focuses on exoskeletons. It will be available soon.

The post Wearable device could improve communication between humans, robots appeared first on The Robot Report.

Velodyne Lidar acquires Mapper.ai for advanced driver assistance systems

SAN JOSE, Calif. — Velodyne Lidar Inc. today announced that it has acquired Mapper.ai’s mapping and localization software, as well as its intellectual property assets. Velodyne said that Mapper’s technology will enable it to accelerate development of the Vella software that supports its directional-view Velarray lidar sensor.

The Velarray is the first solid-state Velodyne lidar sensor that is embeddable and fits behind a windshield, said Velodyne, which described it as “an integral component for superior, more effective advanced driver assistance systems” (ADAS).

The company provides lidar sensors for autonomous vehicles and driver assistance. David Hall, Velodyne’s founder and CEO, invented real-time surround-view lidar systems in 2005 as part of Velodyne Acoustics. His invention revolutionized perception and autonomy for automotive applications, new mobility, mapping, robotics, and security.

Velodyne said its high-performance product line includes a broad range of sensors, including the cost-effective Puck, the versatile Ultra Puck, and the autonomy-advancing Alpha Puck.

Mapper.ai staffers to join Velodyne

Mapper’s entire leadership and engineering teams will join Velodyne, bolstering the company’s large and growing software-development group. The talent from Mapper.ai will augment the current team of engineers working on Vella software, which will accelerate Velodyne’s production of ADAS solutions.

Velodyne claimed its technology will allow customers to unlock advanced capabilities for ADAS features, including pedestrian and bicycle avoidance, Lane Keep Assistance (LKA), Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Traffic Jam Assist (TJA).

“By adding Vella software to our broad portfolio of lidar technology, Velodyne is poised to revolutionize ADAS performance and safety,” stated Anand Gopalan, chief technology officer at Velodyne. “Expanding our team to develop Vella is a giant step towards achieving our goal of mass-producing an ADAS solution that dramatically improves roadway safety.”

“Mapper technology gives us access to some key algorithmic elements and accelerates our development timeline,” Gopalan added. “Together, our sensors and software will allow powerful lidar-based safety solutions to be available on every vehicle.”

Mapper.ai to contribute to Velodyne software

Mapper.ai developers will work on the Vella software for the Velarray sensor. Source: Velodyne Lidar

“Velodyne has both created the market for high-fidelity automotive lidar and established itself as the leader. We have been Velodyne customers for years and have already integrated their lidar sensors into easily deployable solutions for scalable high-definition mapping,” said Dr. Nikhil Naikal, founder and CEO of Mapper, who is joining Velodyne. “We are excited to use our technology to speed up Velodyne’s lidar-centric software approach to ADAS.”

In addition to ADAS, Velodyne said it will incorporate Mapper technology into lidar-centric solutions for other emerging applications, including autonomous vehicles, last-mile delivery services, security, smart cities, smart agriculture, robotics, and unmanned aerial vehicles.


LUKE prosthetic arm has sense of touch, can move in response to thoughts

Keven Walgamott had a good “feeling” about picking up the egg without crushing it. What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of LUKE, a high-tech prosthetic arm with fingers that not only move but move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.

That’s because the team, led by University of Utah biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (named after the robotic hand that Luke Skywalker got in The Empire Strikes Back) to mimic the way a human hand feels objects by sending the appropriate signals to the brain.

Their findings were published in a new paper co-authored by University of Utah biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark, and other colleagues in the latest edition of the journal Science Robotics.

Sending the right messages

“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”

That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up, and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.

“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”

Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the University of Utah, was able to pluck grapes without crushing them, pick up an egg without cracking it, and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.

“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”

How those things are accomplished is through a complex series of mathematical calculations and modeling.

Keven Walgamott wears the LUKE prosthetic arm. Credit: University of Utah Center for Neural Interfaces

The LUKE Arm

The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicone “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.

Meanwhile, the University of Utah team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by University of Utah biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array.

The Array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.

But it also works the other way. Performing tasks such as picking up objects requires more than the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert, because you can’t figure that out just by looking at it.

First, the prosthetic arm has sensors in its hand that send signals to the nerves via the Array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how the brain deals with transitions in information when the hand first touches something. Upon first contact with an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.

“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.

To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.

Future research

In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.

Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.

Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.

The research involves a number of institutions, including the University of Utah’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation, and Department of Orthopedics; the University of Chicago’s Department of Organismal Biology and Anatomy; the Cleveland Clinic’s Department of Biomedical Engineering; and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.

“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”

Editor’s note: Reposted from the University of Utah.


Neural Analytics partners with NGK Spark Plug to scale up medical robots


The Lucid Robotic System has received FDA clearance. Source: Neural Analytics

LOS ANGELES — Neural Analytics Inc., a medical robotics company developing and commercializing technologies to measure and track brain health, has announced a strategic partnership with NGK Spark Plug Co., a Japan-based company that specializes in comprehensive ceramics processing. Neural Analytics said the partnership will allow it to expand its manufacturing capabilities and global footprint.

Neural Analytics’ Lucid Robotic System (LRS) includes the Lucid M1 Transcranial Doppler Ultrasound System and NeuralBot system. The resulting autonomous robotic transcranial Doppler (rTCD) platform is designed to non-invasively search for, measure, and display objective brain blood-flow information in real time.

The Los Angeles-based company’s technology integrates ultrasound and robotics to empower clinicians with critical information about brain health to make clinical decisions. Through its algorithm, analytics, and autonomous robotics, Neural Analytics provides valuable information that can identify pathologies such as Patent Foramen Ovale (PFO), a form of right-to-left shunt.

Nagoya, Japan-based NGK Spark Plug claims to be the world’s leading manufacturer of spark plugs and automotive sensors, as well as a broad lineup of packaging, cutting tools, bio ceramics, and industrial ceramics. The company has more than 15,000 employees and develops products related to the environment, energy, next-generation vehicles, and the medical device and diagnostic industries.

Neural Analytics and NGK to provide high-quality parts, global access

“This strategic partnership between Neural Analytics and NGK Spark Plug is built on a shared vision for the future of global healthcare and a foundation of common values,” said Leo Petrossian, Ph.D., co-founder and CEO of Neural Analytics. “We are honored with this opportunity and look forward to learning from our new partners how they have built a great global enterprise.”

NGK Spark Plug has vast manufacturing expertise in ultra-high-precision ceramics. With this partnership, both companies said they are committed to working together to build high-quality products at a reasonable cost, allowing greater access to technologies like the Lucid Robotic System.

“I am very pleased with this strategic partnership with Neural Analytics,” said Toru Matsui, executive vice president of NGK Spark Plug. “This, combined with a shared vision, is an exciting opportunity for both companies. This alliance enables the acceleration of their great technology to the greater market.”

This follows Neural Analytics’ May announcement of its Series C round close, led by Alpha Edison. In total, the company has raised approximately $70 million in funding to date.

Neural Analytics said it remains “committed to advancing brain healthcare through transformative technology to empower clinicians with the critical information needed to make clinical decisions and improve patient outcomes.”


6 common mistakes when setting up safety laser scanners


Having worked in industrial automation for most of my career, I’d like to think that I’ve built up a wealth of experience with industrial safety sensors. I have been familiar with safety laser scanners for over a decade and have been involved in many designs and installations.

I currently work for SICK (UK) Ltd., which invented the safety laser scanner, and I continually see people making the same mistakes. This short piece highlights, in my opinion, the most common of them.

1. Installation and mounting: Thinking about safety last

If you are going to remember just one point, then this is it. Too many times have I been present at an “almost finished” machine and asked, “Right, where can I stick this scanner?”

Inevitably, what ends up happening is that blind spots (shadows created by obstacles) become apparent all over the place. This requires mechanical “bodges” and maybe even additional scanners to cover the complete area, when one scanner may have been sufficient had the cell been designed properly in the first place.

In safety, designing something out is by far the most cost-effective and robust solution. If you know you are going to be using a safety laser scanner, then design it in from the beginning — it could save you a world of pain. Consider blind zones, coverage and the location of hazards.

This also goes for automated guided vehicles (AGVs). For example, the most appropriate position to completely cover an AGV is to have two scanners adjacent to each other on the corners integrated into the vehicle (See Figure 1).

Figure 1: Typical AGV scanner mounting and integration. | Credit: SICK

2. Incorrect multiple sampling values configured

An often misunderstood concept, multiple sampling specifies how many times in succession an object has to be scanned before a safety laser scanner reacts. By default and out of the box, this value is usually two scans, which is the minimum. However, the default may vary from manufacturer to manufacturer. A higher multiple-sampling value reduces the possibility that insects, weld sparks, weather (for outdoor scanners) or other particles cause the machine to shut down.

Increasing the multiple sampling can increase a machine’s availability, but it can also have negative effects on the application. Increasing the number of samples essentially adds an OFF-delay to the system, meaning that your protective field may need to be bigger due to the increase in the total response time.

If a scanner has a robust detection algorithm, you shouldn’t have to increase this value much. Whenever the value is changed, you risk creating a hazard through reduced effectiveness of the protective device.

If the value is changed, you should make a note of the safety laser scanner’s new response time and adjust the minimum distance from the hazardous point accordingly to ensure it remains safe.
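As a rough sketch of that adjustment, the minimum distance for a horizontal protective field follows EN ISO 13855 (S = K × T + C, with K = 1600 mm/s and C = 1200 - 0.4H, floored at 850 mm). The simplification below that the scanner’s response time equals the scan-cycle time multiplied by the number of samples is an assumption; check your scanner’s datasheet for the real figures.

```python
def min_protective_field_distance(t_scan_s, n_samples, t_machine_stop_s,
                                  mount_height_mm):
    """Minimum distance S (mm) from the hazard for a horizontal protective
    field, per EN ISO 13855: S = K * T + C.

    Simplification (verify against the datasheet): scanner response time
    is taken as scan-cycle time * number of multiple samples.
    """
    K = 1600.0  # approach speed of a walking person, mm/s (EN ISO 13855)
    t_scanner = t_scan_s * n_samples       # extra samples add scan cycles
    T = t_scanner + t_machine_stop_s       # total stopping performance, s
    # Intrusion allowance C shrinks with mounting height H, floor of 850 mm.
    C = max(1200.0 - 0.4 * mount_height_mm, 850.0)
    return K * T + C

# Raising the samples from 2 to 8 on a 30 ms scan adds 180 ms of response
# time, i.e. 288 mm of extra protective-field distance at walking speed.
```

This makes the trade-off in section 2 concrete: every extra sample buys availability but costs protective-field size.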

Furthermore, in vertical applications, if the multiple sampling is set too high, it may be possible for a person to pass through the protective field undetected, so care must be taken. For one of our latest safety laser scanners, the microScan3, we provide the following advice:

Figure 2: Recommended multiple sampling values. | Credit: SICK

3. Incorrect selection of safety laser scanner

The maximum protective field that a scanner can provide is an important feature, but this value alone should not decide whether the scanner is suitable for an application. A safety laser scanner is a Type 3 device according to IEC 61496, an Active Opto-electronic Protective Device responsive to Diffuse Reflection (AOPDDR). This means it depends on diffuse reflections off objects; to achieve longer ranges, scanners must be more sensitive. In practice, this means that scanning angle, and certainly detection robustness, can be sacrificed.

This could lead to a requirement for an increased number of multiple samples and perhaps a loss of angular resolution. The increased response times and reduced angular coverage could mean that larger protective fields, and even additional scanners, are required, even though you bought the longer-range model. A protective field should be as large as required but as small as possible.

A shorter-range scanner may be more robust than its longer-range big brother and, hence, keep the response time down, reduce the footprint, reduce cost and eliminate annoying false trips.

4. Incorrect resolution selected

The harmonized standard EN ISO 13855 covers the positioning of safeguards with respect to the approach speeds of the human body. Persons or parts of the body may not be detected, or not detected in time, if the positioning or configuration is incorrect. The safety laser scanner should be mounted so that crawling beneath, climbing over and standing behind the protective fields is not possible.

If crawling under could create a hazardous situation, then the safety laser scanner should be mounted no higher than 300 mm. At this height, a resolution of up to 70 mm can be selected to ensure that a human leg can be detected. However, it is sometimes not possible to mount the safety laser scanner at this height. If mounted below 300 mm, a resolution of 50 mm should be used.

It is a very common mistake to mount the scanner lower than 300 mm and leave the resolution at 70 mm. Note that a finer resolution may also reduce the maximum protective field possible on a safety laser scanner, so it is important to check.
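The 300 mm / 70 mm pairing falls out of the EN ISO 13855 relationship between detection capability d and mounting height H for horizontal fields (d ≤ H/15 + 50 mm). A minimal sketch of that check, assuming this form of the relation:

```python
def max_resolution_for_height(mount_height_mm):
    """Maximum detection capability d (mm) allowed for a horizontal field
    mounted at height H, from the EN ISO 13855 relation d <= H/15 + 50."""
    return mount_height_mm / 15.0 + 50.0

def resolution_ok(mount_height_mm, resolution_mm):
    # True if the configured resolution still detects a leg at this height.
    return resolution_mm <= max_resolution_for_height(mount_height_mm)
```

At 300 mm the allowed resolution works out to exactly 70 mm; at 150 mm it drops to 60 mm, so a 70 mm setting would fail the check.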

5. Ambient/environmental conditions were not considered

Sometimes safety laser scanners just aren’t suitable for an application. Coming from someone who sells and supports these devices, that is a difficult thing to say. However, scanners are electro-sensitive protective equipment, and infrared light can be a tricky thing to work with. Scanners have become very robust devices over the last decade, with increasingly complex detection techniques (SafeHDDM by SICK), and there are even safety laser scanners certified to work outdoors (outdoorScan3 by SICK).

However, there is a big difference between safety and availability, and expectations need to be realistic from the beginning. A scanner might not maintain 100% machine availability if there is heavy dust, thick steam, excessive wood chippings, or even dandelions constantly in front of its field of view. Even though the scanner will remain safe and react to such situations, trips due to ambient conditions may not be acceptable to the user.

For extreme environments, the following question should be asked: “What happens when the scanner is not available due to extreme conditions?” This is especially true in outdoor applications in heavy rain, snow or fog. A full assessment of the ambient conditions, and potentially even proof tests, should be carried out. This particular issue can be very difficult, sometimes impossible, and expensive to fix.

6. Non-safe switching of field sets

A field set in a safety laser scanner can consist of multiple different field types. For example, a field set could consist of four safe protective fields (Field Set 1), or of one safe protective field, two non-safe warning fields and a safe detection field (Field Set 2). See Figure 3.

Figure 3: Safety laser scanner field sets. | Credit: SICK

A scanner can store lots of different fields that can be selected using either hardwired inputs or safe networked inputs (CIP Safety, PROFISAFE, EFI Pro). This is a feature that industry finds very useful for both safety and productivity in Industry 4.0 applications.

However, the safety function (as per EN ISO 13849/EN 62061) for selecting the field set at any particular point in time should normally have the same safety robustness (PL/SIL) as the scanner itself. A safety laser scanner can be used in safety functions up to PLd/SIL2.

If we look at AGVs, for example, usually two rotary encoders are used to switch between fields, achieving field switching up to PLe/SIL3. There are now also safety-rated rotary encoders that can be used alone to achieve field switching up to PLd/SIL2.

However, sometimes the safety of the mode selection is overlooked. For example, if a standard PLC or a single-channel limit switch is used for selecting a field set, this would reduce the PL/SIL of the whole system to possibly PLc or even PLa. An incorrect selection of field set could mean that an AGV is operating with a small protective field in combination with a high speed, and hence a long stopping time, creating a hazardous situation.
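The point about a standard PLC dragging the whole function down can be sketched with a crude ceiling rule. This is only the obvious upper bound, not the full EN ISO 13849 assessment (which also considers MTTFd, diagnostic coverage and common-cause failures); the function name is my own:

```python
# Ranking of EN ISO 13849 Performance Levels, lowest to highest.
PL_ORDER = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}

def pl_ceiling(*subsystem_pls):
    """Upper bound on the PL of a safety function: it can do no better
    than its weakest subsystem. The standard's full calculation may
    yield an even lower result; this min() rule is only the ceiling."""
    return min(subsystem_pls, key=lambda pl: PL_ORDER[pl])

# A PLd scanner behind a PLc field-selection path is at best PLc overall.
```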

Summary

Scanners are complex devices that have been around for a long time, and the market offers plenty of choice with regard to range, connectivity, size and robustness. There are also many variables to consider when designing a safety solution using scanners. If you are new to this technology, it is a good idea to contact the manufacturer for advice on applying these devices.

Here at SICK we offer complimentary services to our customers such as consultancy, on-site engineering assistance, risk assessment, safety concepts and safety verification of electro-sensitive protective equipment (ESPE). We are always happy to answer any questions, so if you’d like to get in touch, please do not hesitate to contact us.

About the Author

Dr. Martin Kidman is a Functional Safety Engineer and Product Specialist, Machinery Safety at SICK (UK) Ltd. He received his Ph.D. from the University of Liverpool in 2010 and has been involved in industrial automation since 2006, working for various manufacturers of sensors.

Kidman has been at SICK since January 2013 as a product specialist for machinery safety providing services, support and consultancy for industrial safety applications. He is a certified FS Engineer (TUV Rheinland, #13017/16) and regularly delivers seminars and training courses covering functional safety topics. Kidman has also worked for a notified body testing to the Low Voltage Directive in the past.


Challenges of building haptic feedback for surgical robots


Minimally invasive surgery (MIS) is a modern technique that allows surgeons to perform operations through small incisions (usually 5-15 mm). Although it has numerous advantages over older surgical techniques, MIS can be more difficult to perform. Some inherent drawbacks are:

  • Limited motion due to straight laparoscopic instruments and fixation enforced by the small incision in the abdominal wall
  • Impaired vision due to two-dimensional imaging
  • Long instruments that amplify the effects of the surgeon’s tremor
  • Poor ergonomics imposed on the surgeon
  • Loss of haptic feedback, which is distorted by friction forces on the instrument and reactionary forces from the abdominal wall.

Minimally Invasive Robotic Surgery (MIRS) offers solutions that minimize or eliminate many of the pitfalls associated with traditional laparoscopic surgery. MIRS platforms such as Intuitive Surgical’s da Vinci, approved by the U.S. Food and Drug Administration in 2000, represent a historical milestone in surgical treatment. The ability to retain the advantages of laparoscopic surgery while augmenting surgeons’ dexterity and visualization and eliminating the ergonomic discomfort of long surgeries makes MIRS an essential technology for patients, surgeons and hospitals.

However, despite all improvements in the currently commercially available MIRS platforms, haptic feedback is still a major limitation reported by robot-assisted surgeons. Because the interventionist no longer manipulates the instrument directly, the natural haptic feedback is eliminated. Haptics is a conjunction of kinesthetic perception (of the form and shape of muscles, tissues and joints) and tactile perception (of cutaneous texture and fine detail), and combines many physical variables such as force, distributed pressure, temperature and vibration.

Direct benefits of sensing interaction forces at the surgical end-effector are:

  • Improved organic tissue characterization and manipulation
  • Assessment of anatomical structures
  • Reduction of suture breakage
  • An overall improvement in the feel of robot-assisted surgery.

Haptic feedback also plays a fundamental role in shortening the learning curve for young surgeons in MIRS training. A tertiary benefit of accurate real-time direct force measurement is that the data collected from these sensors can be utilized to produce accurate tissue and organ models for surgical simulators used in MIS training. Futek Advanced Sensor Technology, an Irvine, Calif.-based sensor manufacturer, shared these tips on how to design and manufacture haptic sensors for surgical robotics platforms.

With a force, torque and pressure sensor enabling haptic feedback to the hands of the surgeon, robotic minimally invasive surgery can be performed with higher accuracy and dexterity while minimizing trauma to the patient. | Credit: Futek

Technical and economic challenges of haptic feedback

Adding to the inherent complexity of measuring haptics, engineers and neuroscientists also face important issues that require consideration prior to the sensor design and manufacturing stages. The location of the sensing element, which significantly influences measurement consistency, presents MIRS designers with a dilemma: should they place the sensor outside the abdominal wall, near the actuation mechanism driving the end-effector (indirect force sensing), or inside the patient at the instrument tip, embedded in the end-effector (direct force sensing)?

The pros and cons of these two approaches are associated with measurement accuracy, size restrictions and sterilization and biocompatibility requirements. Table 1 compares these two force measurement methods.

In MIRS applications, where very delicate instrument-tissue interaction forces must be fed back precisely to the surgeon, measurement accuracy is a sine qua non, which makes intra-abdominal direct sensing the ideal option.

However, this novel approach not only brings the design and manufacturing challenges described in Table 1 but also demands higher reusability. Commercially available MIRS systems that are modular in design allow the laparoscopic instrument to be reused approximately 12 to 20 times. Adding the sensing element near the end-effector invariably increases the cost of the instrument and demands further consideration during the design stage to enhance sensor reusability.

Appropriate electronic components, strain measurement methods and electrical connections have to withstand repeated autoclave cycles as well as survive high-pH washing. Coping with these special design requirements invariably increases the unit cost per sensor. However, the extended lifespan and greater number of cycles reduce the cost per cycle, making the direct measurement method financially viable.
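The economics reduce to a one-line calculation. The figures below are purely hypothetical, chosen only to illustrate how a higher unit cost can still yield a lower cost per cycle:

```python
def cost_per_cycle(unit_cost, survivable_cycles):
    """Effective cost of a single sterilization/use cycle."""
    return unit_cost / survivable_cycles

# Purely hypothetical figures, for illustration only:
coated = cost_per_cycle(1000.0, 25)      # conformal-coated, ~25 cycles
hermetic = cost_per_cycle(3000.0, 1000)  # hermetic sensor, ~1000 cycles
```

Under these made-up numbers, the sensor that costs three times as much per unit is more than an order of magnitude cheaper per cycle.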

Hermeticity of high-precision sub-miniature load sensing elements is an equal challenge for intra-abdominal direct force measurement. The conventional approach to sealing electronic components is conformal coating, extensively used in submersible devices. As much as this solution protects consumer electronics in low-pressure water submersion, coating protection is not sufficiently airtight and is not suitable for high-reliability medical, reusable and sterilizable solutions.

Under extreme process controls, conformal coatings have proven marginal, providing upwards of 20 to 30 autoclave cycles. The autoclave sterilization process presents a harsher physicochemical environment using high-pressure, high-temperature saturated steam. As in helium leak detection, saturated steam particles are much smaller than water particles and can penetrate and degrade the coating over time, causing the device to fail in a hardly predictable manner.

An alternative, conventional approach to achieving hermeticity is welding a header interface onto the sensor. Again, welding faces obstacles in miniaturized sensors due to size constraints. A more novel and robust approach is a monolithic sensor using custom-formulated, CTE-matched, chemically neutral, high-temperature fused-isolator technology to feed electrical conductors through the walls of the hermetically sealed active sensing element. Fused-isolator technology has demonstrated reliability over hundreds to thousands of autoclave cycles.




Other design considerations for haptic feedback

As mentioned above, miniaturization, biocompatibility, autoclavability, and high reusability are some of the unique requirements the surgical environment imposes on a haptic sensor. In addition, it is imperative that designers also meet requirements that are inherent to any high-performance force measurement device.

Compensation for extraneous loads (or crosstalk) provides optimal resistance to off-axis loads, assuring maximum operating life and minimizing reading errors. Force and torque sensors are engineered to capture forces along the Cartesian axes, typically X, Y, and Z. From these three orthogonal axes, up to six measurement channels are derived: three force channels (Fx, Fy, and Fz) and three torque or moment channels (Mx, My, and Mz). In theory, a load applied along one axis should not produce a reading in any other channel, but this is not always the case. For a majority of force sensors, this undesired cross-channel interference is between 1% and 5%; considering that one channel can pick up extraneous loads from the five other channels, the total crosstalk can be as high as 5% to 25%.
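To make the arithmetic concrete, here is a short sketch with an invented 2% coefficient (inside the 1-5% range quoted above) showing how a pure single-axis load leaks into the other channels, and how the 5x worst-case figure arises:

```python
import numpy as np

# Six channels in the order (Fx, Fy, Fz, Mx, My, Mz). The 2% coefficient
# is an invented value inside the 1-5% range quoted above.
crosstalk = 0.02
C = np.full((6, 6), crosstalk)
np.fill_diagonal(C, 1.0)          # each channel reads its own axis fully

true_load = np.array([10.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # pure 10 N Fx load
reading = C @ true_load
print(reading)                     # 0.2 N appears on every other channel

# Worst case: one channel collects crosstalk from all five other channels.
worst_case_total = 5 * crosstalk
print(f"worst-case total crosstalk: {worst_case_total:.0%}")
```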

In robotic surgery, the sensor must be designed to reject these extraneous or crosstalk loads, which include friction between the end-effector instrument and the trocar, reaction forces from the abdominal wall, and the gravitational effect of mass along the instrument axis. In some cases, miniaturized sensors are so space-constrained that side loads must be compensated by other means, such as electronic or algorithmic compensation.

Calibration of a direct inline force sensor imposes restrictions as well. Calibration fixtures are optimized with SR buttons to direct the load precisely through the sensing element of the part. If the calibration assembly is not equipped with such arrangements, the final calibration may be affected by parallel load paths.

Thermal effects are also a major challenge in strain measurement. Temperature variations cause material expansion, gage-factor variation, and other undesirable effects on the measurement result. For this reason, temperature compensation is paramount to ensure accuracy and long-term stability, even under severe ambient temperature swings.

The measures to counteract temperature effects on the readings are:

  • The use of high-quality, custom, self-compensated strain gages matched to the thermal expansion coefficient of the sensing-element material
  • Use of a half- or full-Wheatstone-bridge configuration, with gages installed in both load directions (tension and compression), to correct for temperature drift
  • Full internal temperature compensation of zero balance and output range, without the need for external conditioning circuitry
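The bridge-configuration measure can be illustrated numerically. In this sketch (all resistance values invented), a full bridge whose four arms see the same temperature-induced resistance change shows no zero drift, while a quarter bridge with a single active gage drifts:

```python
def bridge_output(r1, r2, r3, r4, v_exc=5.0):
    # Differential output of a Wheatstone bridge built from two dividers.
    return v_exc * (r3 / (r3 + r4) - r2 / (r1 + r2))

R0 = 350.0   # nominal gage resistance, ohms (a common strain-gage value)
dT = 0.5     # resistance change from a temperature swing, ohms (invented)

# Quarter bridge: one active gage, three fixed resistors. Temperature
# shows up directly as a spurious zero-load output.
quarter_drift = bridge_output(R0, R0, R0 + dT, R0)

# Full bridge: all four arms are gages on the same element, so they all
# see the same dT, and the common change cancels at zero load.
full_drift = bridge_output(R0 + dT, R0 + dT, R0 + dT, R0 + dT)

print(f"quarter-bridge zero drift: {quarter_drift * 1e3:.3f} mV")
print(f"full-bridge zero drift:    {full_drift * 1e3:.3f} mV")
```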

In some special cases, the use of custom strain gages with fewer solder connections helps reduce temperature impacts from solder joints. A typical force sensor with four individual strain gages has upwards of 16 solder joints, while custom strain elements can reduce this to fewer than six. This design consideration improves reliability, as solder joints, a common failure point, are significantly reduced in number.

During the design phase, it is also imperative to design such sensors for high reliability alongside high-volume manufacturability, taking into consideration the equipment and processes that will be required should the device be designated for high-volume manufacturing. Automated, high-volume processes can differ slightly or significantly from the benchtop or prototype equipment used at lower volumes. Scaling up must maintain focus on reducing failure points during the manufacturing process, along with failure points that could occur in the field.

Testing for medical applications is less about resisting strenuous structural stress and more about the device's ability to withstand a high number of cycles. In particular, for medical sensors, overload and fatigue testing must be performed in conjunction with sterilization testing, in an intercalated process alternating several rounds of fatigue and sterilization. The ability to survive hundreds of overload cycles while maintaining hermeticity translates into a failure-free, high-reliability sensor with a higher MTBF and a more competitive total cost of ownership.

Haptic sensors. | Credit: Futek

Product development challenges

Although understanding the inherent design challenges of the haptic autoclavable sensor is imperative, the sensor manufacturer must be equipped with a talented multidisciplinary engineering team, in-house manufacturing capabilities supported by fully developed quality processes and product/project management proficiency to handle the complex, resource-limited, and fast-paced new product development environment.

A multidisciplinary approach will result in a sensor element that meets the specifications in terms of nonlinearity, hysteresis, repeatability and cross-talk, as well as an electronic instrument that delivers analog and digital output, high sampling rate and bandwidth, high noise-free resolution and low power consumption, both equally necessary for a reliable turnkey haptics measurement solution.

Strategic control of all manufacturing processes (machining, lamination, wiring, calibration), allows manufacturers to engineer sensors with a design for manufacturability (DFM) mentality. This strategic control of manufacturing boils down to methodically selecting the bill of material, defining the testing plans, complying with standards and protocols and ultimately strategizing the manufacturing phase based on economic constraints.

The post Challenges of building haptic feedback for surgical robots appeared first on The Robot Report.

Electronic skin could give robots an exceptional sense of touch


electronic skin

The National University of Singapore developed the Asynchronous Coded Electronic Skin, an artificial nervous system that could give robots an exceptional sense of touch. | Credit: National University of Singapore.

Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by researchers at the National University of Singapore (NUS).

The new electronic skin system has ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.

The innovation, achieved by Assistant Professor Benjamin Tee and his team from NUS Materials Science and Engineering, was first reported in the prestigious scientific journal Science Robotics on 18 July 2019.

Faster than the human sensory nervous system

“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hopes of giving robots and prosthetic devices a better sense of touch.

Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system does, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.

Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology, N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”

Related: Challenges of building haptic feedback for surgical robots

ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contact between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.

The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the interconnection schemes used in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor, with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
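The shared-conductor idea can be sketched loosely in code. Here each sensor drives a unique, mutually orthogonal pulse signature onto one wire, and the receiver recovers which sensors fired by correlation; the signatures and decoding below are invented stand-ins, not the actual ACES encoding reported in Science Robotics:

```python
import numpy as np

# Rows of a Hadamard matrix are mutually orthogonal, which makes them
# handy deterministic per-sensor pulse signatures for this sketch.
H = np.array([[1.0]])
for _ in range(3):                      # build an 8 x 8 Hadamard matrix
    H = np.block([[H, H], [H, -H]])
signatures = H                          # 8 sensors, 8-chip signatures

touched = {2, 5}                        # sensors currently in contact
line = signatures[sorted(touched)].sum(axis=0)   # superpose on one wire

# Receiver: correlate the shared line against every known signature.
scores = signatures @ line / signatures.shape[1]
decoded = {i for i, s in enumerate(scores) if s > 0.5}
print(decoded)  # {2, 5}
```

Because the sensors transmit independently, losing one sensor's connection leaves the others decodable, mirroring the damage tolerance described above.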

The ACES developed by Asst. Professor Tee (left) and his team responds 1000 times faster than the human sensory nervous system. | Credit: National University of Singapore

Smart electronic skins for robots and prosthetics

ACES has a simple wiring system and remarkable responsiveness even with increasing numbers of sensors. These key characteristics will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.

Related: UT Austin Patent Gives Robots Ultra-Sensitive Skin (https://www.therobotreport.com/university-of-texas-austin-patent-gives-robots-ultra-sensitive-skin/)

“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.

For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.

Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.

Editor’s Note: This article was republished from the National University of Singapore.

The post Electronic skin could give robots an exceptional sense of touch appeared first on The Robot Report.

Augmenting SLAM with deep learning

Some elements of the Spatial AI real-time computation graph. Credit: SLAMcore

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a robot’s location within it. SLAM is being gradually developed towards Spatial AI, the common sense spatial reasoning that will enable robots and other artificial devices to operate in general ways in their environments.

This will enable robots to not just localize and build geometric maps, but actually interact intelligently with scenes and objects.
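The "simultaneous" part of the problem can be conveyed with a toy 1-D Kalman-filter sketch: the state jointly holds the robot and one landmark position, and each range observation tightens both at once. All values and noise levels here are invented; real SLAM systems work in 2-D/3-D with many landmarks and data association:

```python
import numpy as np

rng = np.random.default_rng(1)

x_true = np.array([0.0, 10.0])   # [robot, landmark] ground truth
x_est = np.array([0.0, 8.0])     # landmark starts badly mapped
P = np.diag([0.01, 4.0])         # large initial landmark uncertainty

Q = np.diag([0.05, 0.0])         # motion noise; the landmark is static
R = 0.1                          # range-measurement noise variance
H = np.array([[-1.0, 1.0]])      # measured range = landmark - robot

for _ in range(15):
    # Predict: the robot commands a 0.5 m step; its odometry is noisy.
    u = 0.5
    x_true[0] += u + rng.normal(0, Q[0, 0] ** 0.5)
    x_est[0] += u
    P = P + Q

    # Update: fuse a noisy range measurement to the landmark.
    z = (x_true[1] - x_true[0]) + rng.normal(0, R ** 0.5)
    y = z - (x_est[1] - x_est[0])        # innovation
    S = (H @ P @ H.T)[0, 0] + R
    K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
    x_est = x_est + K[:, 0] * y
    P = (np.eye(2) - K @ H) @ P

rel_err = abs((x_est[1] - x_est[0]) - (x_true[1] - x_true[0]))
print(f"relative map error: {rel_err:.3f} m")
```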

Enabling semantic meaning

A key technology that is helping this progress is deep learning, which has enabled many recent breakthroughs in computer vision and other areas of AI. In the context of Spatial AI, deep learning has most obviously had a big impact on bringing semantic meaning to geometric maps of the world.

Convolutional neural networks (CNNs) trained to semantically segment images or volumes have been used in research systems to label geometric reconstructions in a dense, element-by-element manner. Networks like Mask-RCNN, which detect precise object instances in images, have been demonstrated in systems that reconstruct explicit maps of static or moving 3D objects.
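The element-by-element labeling can be sketched as simple Bayesian fusion of per-frame CNN predictions for one map element; the class names and probabilities below are invented for illustration:

```python
import numpy as np

# One map element (surfel/voxel) accumulates evidence over three frames.
classes = ["floor", "table", "mug"]
log_evidence = np.zeros(len(classes))   # uniform prior over classes

frames = [                              # invented per-frame CNN outputs
    np.array([0.2, 0.5, 0.3]),
    np.array([0.1, 0.3, 0.6]),
    np.array([0.1, 0.2, 0.7]),
]
for p in frames:
    log_evidence += np.log(p)           # naive-Bayes fusion of predictions

fused = np.exp(log_evidence - log_evidence.max())
fused /= fused.sum()                    # normalized posterior over classes
print(classes[int(fused.argmax())])     # mug
```

Repeating this per voxel or surfel is what turns a purely geometric reconstruction into a semantically labeled map.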

Deep learning vs. estimation

In these approaches, the divide between deep learning methods for semantics and hand-designed estimation methods for geometry is clear. More remarkable, at least to those of us from an estimation background, has been the emergence of learning techniques that now offer promising solutions to geometrical estimation problems. Networks can be trained to predict robust frame-to-frame visual odometry, dense optical flow, or depth from a single image.

When compared to hand-designed methods for the same tasks, these methods are strong on robustness, since they will always make predictions that are similar to real scenarios present in their training data. But designed methods still often have advantages in flexibility in a range of unforeseen scenarios, and in final accuracy due to the use of precise iterative optimization.
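These complementary strengths suggest a hybrid: a learned predictor supplies a robust coarse estimate, and precise iterative optimization polishes it. The sketch below shows the pattern on a toy 1-D alignment problem, where the "network prediction" is simply a hard-coded coarse guess:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
true_shift = 0.73
cur = np.sin(3.0 * (x - true_shift))   # "current image": a shifted signal

# Stand-in for a learned predictor: robust but imprecise initial estimate.
shift = 0.6

# Classical refinement: Gauss-Newton on the photometric-style residual.
for _ in range(20):
    r = np.sin(3.0 * (x - shift)) - cur        # residual
    J = -3.0 * np.cos(3.0 * (x - shift))       # analytic Jacobian wrt shift
    shift -= (J @ r) / (J @ J)                 # one Gauss-Newton step

print(round(shift, 4))
```

The learned guess only needs to land inside the convergence basin; the optimizer then recovers the parameter to high precision, which is exactly the accuracy advantage of designed methods noted above.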

The three levels of SLAM, according to SLAMcore. Credit: SLAMcore

The role of modular design

It is clear that Spatial AI will make increasingly strong use of deep learning methods, but an excellent question is whether we will eventually deploy systems where a single deep network trained end to end implements the whole of Spatial AI.  While this is possible in principle, we believe that this is a very long-term path and that there is much more potential in the coming years to consider systems with modular combinations of designed and learned techniques.

There is an almost continuous sliding scale of possible ways to formulate such modular systems. The end-to-end learning approach is ‘pure’ in the sense that it makes minimum assumptions about the representation and computation that the system needs to complete its tasks. Deep learning is free to discover such representations as it sees fit. Every piece of design which goes into a module of the system or the ways in which modules are connected reduces that freedom. However, modular design can make the learning process tractable and flexible, and dramatically reduce the need for training data.

Building in the right assumptions

There are certain characteristics of the real world in which Spatial AI systems must operate that seem so elementary that it is unnecessary to spend training capacity on learning them. These could include:

  • Basic geometry of 3D transformation as a camera sees the world from different views
  • Physics of how objects fall and interact
  • The simple fact that the natural world is made up of separable objects at all
  • The fact that environments are made up of many objects in configurations with a typical range of variability over time, which can be estimated and mapped

By building these and other assumptions into modular estimation frameworks that still have significant deep learning capacity in the areas of both semantics and geometrical estimation, we believe that we can make rapid progress towards highly capable and adaptable Spatial AI systems. Modular systems have the further key advantage over purely learned methods that they can be inspected, debugged and controlled by their human users, which is key to the reliability and safety of products.
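The first of those built-in assumptions, basic multi-view 3-D geometry, is a good example of structure that is cheaper to hard-code than to learn. A minimal sketch with invented poses shows a point expressed consistently in two camera frames:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

p_world = np.array([1.0, 2.0, 5.0])     # a world point (invented)

# Camera B's frame is rotated 30 degrees about z and shifted 0.5 m in x
# relative to camera A's frame (taken as the world frame here).
R = rot_z(np.deg2rad(30.0))
t = np.array([0.5, 0.0, 0.0])

p_b = R.T @ (p_world - t)               # express the point in B's frame
p_back = R @ p_b + t                    # and map it back again
print(np.allclose(p_back, p_world))     # True
```

Baking exact transformations like this into the estimation framework frees the network's training capacity for what genuinely must be learned.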

We still believe fundamentally in Spatial AI as a SLAM problem, and that a recognizable mapping capability will be the key to enabling robots and other intelligent devices to perform complicated, multi-stage tasks in their environments.

For those who want to read more about this area, please see my paper “FutureMapping: The Computational Structure of Spatial AI Systems.”

Andrew Davison, SLAMcore

About the Author

Professor Andrew Davison is a co-founder of SLAMcore, a London-based company that is on a mission to make spatial AI accessible to all. SLAMcore develops algorithms that help robots and drones understand where they are and what’s around them – in an affordable way.

Davison is Professor of Robot Vision in the Department of Computing at Imperial College London, where he leads the Robot Vision Research Group. He has spent 20 years conducting pioneering research in visual SLAM, with a particular emphasis on methods that work in real time with commodity cameras.

He has developed and collaborated on breakthrough SLAM systems including MonoSLAM and KinectFusion, and his research contributions have over 15,000 academic citations. He also has extensive experience of collaborating with industry on the application of SLAM methods to real products.

Hank robot from Cambridge Consultants offers sensitive grip to industrial challenges

Robotics developers have taken a variety of approaches to try to equal human dexterity. Cambridge Consultants today unveiled Hank, a robot with flexible robotic fingers inspired by the human hand. Hank uses a pioneering sensory system embedded in its pneumatic fingers, providing a sophisticated sense of touch and slip. It is intended to emulate the human ability to hold and grip delicate objects using just the right amount of pressure.

Cambridge Consultants stated that Hank could have valuable applications in agriculture and warehouse automation, where the ability to pick small, irregular, and delicate items has been a “grand challenge” for those industries.

Picking under pressure

While warehouse automation has taken great strides in the past decade, today’s robots cannot emulate human dexterity at the point of picking diverse individual items from larger containers, said Cambridge Consultants. E‑commerce giants are under pressure to deliver more quickly and at a cheaper price, but still require human operators for tasks that can be both difficult and tedious.

“The logistics industry relies heavily on human labor to perform warehouse picking and packing and has to deal with issues of staff retention and shortages,” said Bruce Ackman, logistics commercial lead at Cambridge Consultants. “Automation of this part of the logistics chain lags behind the large-scale automation seen elsewhere.”

Giving a robot additional human-like senses allows it to feel and orient its grip around an object, applying just enough force while adjusting or abandoning the grasp if the object slips. Other robots with articulated arms used in warehouse automation tend to require complex grasping algorithms, costly sensing devices, and vision sensors to accurately position the end effector (the fingers) and grasp an object.


Hank uses sensors for a soft touch

Hank uses soft robotic fingers controlled by airflows that can flex the finger and apply force. The fingers are controlled individually in response to the touch sensors. This means that the end effector does not require millimeter-accurate positioning to grasp an object. Like human fingers, they close until they “feel” the object, said Cambridge Consultants.

With the ability to locate an object, adjust overall system position and then to grasp that object, Hank can apply increased force if a slip is detected and generate instant awareness of a mishandled pick if the object is dropped.
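That behavior — close until contact, tighten on slip, abandon past a limit — can be sketched as a simple per-finger control loop. The thresholds, pressure units, and sensor interface below are invented for illustration, not Cambridge Consultants' actual controller:

```python
# Invented tuning constants for the sketch.
TOUCH_THRESHOLD = 0.2
PRESSURE_STEP = 0.1
PRESSURE_LIMIT = 1.0

def grip_step(pressure, touch, slip):
    """One control tick for a single finger; returns (pressure, status)."""
    if touch < TOUCH_THRESHOLD:
        return pressure + PRESSURE_STEP, "closing"   # no contact yet
    if slip:
        new_p = pressure + PRESSURE_STEP             # tighten on slip
        if new_p > PRESSURE_LIMIT:
            return 0.0, "abandoned"                  # give up and release
        return new_p, "tightening"
    return pressure, "holding"

# Walk through a pick: approach, contact, one slip event, stable hold.
pressure, log = 0.0, []
for touch, slip in [(0.0, False), (0.1, False), (0.5, False),
                    (0.5, True), (0.5, False)]:
    pressure, status = grip_step(pressure, touch, slip)
    log.append(status)
print(log)  # ['closing', 'closing', 'holding', 'tightening', 'holding']
```

Because each finger runs this loop independently off its own touch sensor, millimeter-accurate end-effector positioning is not required, matching the description above.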

Cambridge Consultants claimed that Hank moves a step beyond legacy approaches to this challenge, which tend to rely on pinchers and suction appendages to grasp items, limiting the number and type of objects they can pick and pack.

“Hank’s world-leading sensory system is a game changer for the logistics industry, making actions such as robotic bin picking and end-to-end automated order fulfillment possible,” said Ackman. “Adding a sense of touch and slip, generated by a single, low-cost sensor, means that Hank’s fingers could bring new efficiencies to giant distribution centers.”

Molded from silicone, Hank’s fingers are hollow and its novel sensors are embedded during molding, with an air chamber running up the center. The finger surface is flexible, food-safe, and cleanable. As a low-cost consumable, the fingers can simply be replaced if they become damaged or worn.

With offices in Cambridge in the U.K.; Boston, Mass.; and Singapore, Cambridge Consultants develops breakthrough products, creates and licenses intellectual property, and provides business and technology consulting services for clients worldwide. It is part of Altran, a global leader in engineering and research and development services. For more than 35 years, Altran has provided design expertise in the automotive, aerospace, defense, industrial, and electronics sectors, among others.

Techmetics introduces robot fleet to U.S. hotels and hospitals

Fleets of autonomous mobile robots have been growing in warehouses and the service industry. Singapore-based Techmetics has entered the U.S. market with ambitions to supply multiple markets, which it already does overseas.

The company last month launched two new lines of autonomous mobile robots. The Techi Butler is designed to serve hotel guests or hospital patients by interacting with them via a touchscreen or smartphone. It can deliver packages, room-service orders, and linens and towels.

The Techi Cart is intended to serve back-of-house services such as laundry rooms, kitchens, and housekeeping departments.

“Techmetics serves 10 different applications, including manufacturing, casinos, and small and midsize businesses,” said Mathan Muthupillai, founder and CEO of Techmetics. “We’re starting with just two in the U.S. — hospitality and healthcare.”

Building a base

Muthupillai founded Techmetics in Singapore in 2012. “We spent the first three years on research and development,” he told The Robot Report. “By the end of 2014, we started sending out solutions.”

“The R&D team didn’t just start with product development,” recalled Muthupillai. “We started with finding clients first, identified their pain points and expectations, and got feedback on what they needed.”

“A lot of other companies make a robotic base, but then they have to build a payload solution,” he said. “We started with a good robot base that we found and added our body, software layer, and interfaces. We didn’t want to build autonomous navigation from scratch.”

“Now, we’re just getting components — lasers, sensors, motors — and building everything ourselves,” he explained. “The navigation and flow-management software are created in-house. We’ve created our own proprietary software.”

“We have a range of products, all of which use 2-D SLAM [simultaneous localization and mapping], autonomous navigation, and many safety sensors,” Muthupillai added. “They come with three lasers — two vertical and one horizontal for path planning. We’re working on a 3-D-based navigation solution.”

“Our robots are based on ROS [the Robot Operating System],” said Muthupillai. “We’ve created a unique solution that comes with third-party interfaces.”

Techmetics offers multiple robot models for different industries.

Source: Techmetics

Techmetics payloads vary

The payload capacity of Techmetics’ robots depends on the application and accessories, and ranges from about 265 to 550 lb. (120 to 250 kg).

“The payload and software are based on the behavior patterns in an industry,” said Muthupillai. “In manufacturing or warehousing, people are used to working around robots, but in the service sector, there are new people all the time. The robot must respond to them — they may stay in its path or try to stop it.”

“When we started this company, there were few mobile robots for the manufacturing industry. They looked industrial and had relatively few safety features because they weren’t near people,” he said. “We changed the form factor for hospitality to be good-looking and safer.”

“When we talk with hotels about the Butler robots, they needed something that could go to multiple rooms,” Muthupillai explained. “Usually, staffers take two to three items in a single trip, so if a robot went to only one room and then returned, that would be a waste of time. Our robots have three compartment levels based on this feedback.”

Elevators posed a challenge for the Techi Butler and Techi Cart — not just for interoperability, but also for human-machine interaction, he said.

“Again, people working with robots didn’t share elevators with robots, but in hospitals and hotels, the robot needs to complete its job alongside people,” Muthupillai said. “After three years, we’re still modifying or adding functionalities, and the robots can take an elevator or go across to different buildings.”

“We’re not currently focusing on the supply chain industry, but we will license and launch the base into the market so that third parties can create their own solutions,” he said.


Techi Cart transports linens and towels in a hotel or hospital. Source: Techmetics

Differentiators for Techi Butler and Cart

“We provide 10 robot models for four industries — no single company is a competitor for all our markets,” said Muthupillai. “We have three key differentiators.”

“First, customers can engage one vendor for multiple needs, and all of our robots can interact with one another,” he said. “Second, we talk with our clients and are always open to customization — for example, about compartment size — that others can’t do.”

“Third, we work across industries and can share our advantages across them,” Muthupillai claimed. “Since we already work with the healthcare industry, we already comply with safety and other regulations.”

“In hospitals or hotels, it’s not just about delivering a product from one point to another,” he said. “We’re adding camera and voice-recognition capabilities. If a robot sees a person who’s lost, it can help them.”


Distribution and expansion

Techmetics’ mobile robots are manufactured in Thailand. According to Muthupillai, 80% of its robots are deployed in hotels and hospitals, and 20% are in manufacturing. The company already has distributors in Australia, Taiwan, and Thailand, and it is leveraging existing international clients for its expansion.

“We have many corporate clients in Singapore,” Muthupillai said. “The Las Vegas Sands Singapore has deployed 10 robots, and their headquarters in Las Vegas is considering deploying our products.”

“Also, U.K.-based Yotel has two hotels in Singapore, and its London branch is also interested,” he added. “The Miami Yotel is already using our robots, and soon they will be in San Francisco.”

Techmetics has three models for customers to choose from. The first is outright purchase, and the second is a two- or three-year lease. “The third model is innovative — they can try the robots for three to six months or a year and then buy,” Muthupillai said.

Muthupillai said he has moved to Techmetics’ branch office in the U.S. to manage its expansion. “We’ll be doing direct marketing in California, and we’re in the process of identifying partners, especially on the East Coast.”

“Only the theme, colors, or logos changed. No special modifications were necessary for the U.S. market,” he said. “We followed safety regulations overseas, but they were tied to U.S. regulations.”

“We will target the retail industry with a robot concierge, probably by the end of this year,” said Muthupillai. “We will eventually offer all 10 models in the U.S.”