Top 10 ROS-based robotics companies in 2019

Source: Ricardo Tellez

The Robot Operating System is becoming the standard in robotics, not only for robotics research, but also for robotics companies that build and sell robots. In this article, I offer a list of the top 10 robotics companies worldwide that base their robotics products on ROS.

Criteria

This is the list of criteria I followed to select the winners:

  • We are talking about robotics companies that build robots. This is not about companies that produce some kind of software based on ROS, but companies that create and ship robots based on ROS. We do not consider companies that do consulting and build solutions for third parties, either.
  • They have created the robots themselves. This means they are not resellers or distributors of robots made by somebody else.
  • They have their robots natively running ROS. This means you switch the robot on, and it is running ROS. We are not taking into account robots that merely support ROS if you install the packages yourself. We concentrate on robots that run ROS off the shelf. For example, you can run ROS on a UR5 arm, but if you buy a UR5, it will not come with ROS support off the shelf; you need to add an extra layer of work. We are not considering those robots.
  • You can program the robots. Even if some companies provide ROS-based robots — such as Locus Robotics — they do not provide a way to program them. They provide the robots as a closed solution. We are not considering closed solutions here.

To summarize the criteria: 1. You can buy the robot directly from the company; 2. The robot runs ROS from Minute 1; and 3. You can program the robot at will.
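To make the third criterion concrete, here is a minimal sketch of what "programming the robot at will" typically looks like on a ROS-native robot: a short rospy node that publishes velocity commands. The topic name (/cmd_vel) is an assumption that varies from vendor to vendor, so check the robot's documentation or rostopic list before running anything like this.

```python
#!/usr/bin/env python
# Minimal sketch: drive a ROS-native robot forward for a few seconds.
# Assumes the robot exposes a geometry_msgs/Twist velocity topic; the topic
# name "/cmd_vel" is an assumption that varies between vendors.
import rospy
from geometry_msgs.msg import Twist

def drive_forward(speed=0.2, duration=3.0):
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    cmd = Twist()
    cmd.linear.x = speed                     # forward velocity in m/s
    rate = rospy.Rate(10)                    # publish at 10 Hz
    end_time = rospy.Time.now() + rospy.Duration(duration)
    while not rospy.is_shutdown() and rospy.Time.now() < end_time:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())                     # zero velocity to stop the robot

if __name__ == "__main__":
    rospy.init_node("minute_one_demo")
    drive_forward()
```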

Once the companies were selected based on the previous criteria, I had to decide the order. The order is based on my personal perception of the impact those companies are making in the ROS world. That is subjective and shaped by my own experience, I know, but that is what it is. Whenever I felt it necessary, I explained my reasoning behind a company’s position on the list.

Now, having clarified all that, let’s go to the list!

Top 10 ROS companies

1. Clearpath Robotics

Clearpath is a Canadian company founded in 2009. The number of robots that it produces in the fields of unmanned ground vehicles, unmanned surface vehicles (on the water), and industrial vehicles is amazing. The company’s robots are based on ROS and can be programmed with ROS from Minute 1. That is why these robots are used in the creation of third-party applications for mining, survey, inspection, agriculture, and material handling.

Some of Clearpath’s best-known robots include Jackal UGV, which you can learn how to program. Others include the Husky UGV, Heron USV, and its recently launched series of Otto robots for industrial environments.

As a testament to its trustworthiness, this company took on responsibility for customer support of the existing PR2 robots once Willow Garage closed its doors. Because of that, and because it is the company with the most varied catalog of ROS robots available, I put it in the well-deserved No. 1 spot on this list.

I interviewed Ryan Gariepy, CTO of Clearpath, for the ROS Developers podcast. You can listen to the interview here.

2. Fetch Robotics

Fetch Robotics was founded by Melonee Wise in 2014, after she was forced to close her pioneering previous company, Unbounded Robotics. We can say that Fetch has two lines of business. The first is its line of mobile manipulators, which are mainly used for robotics research.

The second is a line of industrial robots, which Fetch sells in fleets ready to be deployed in a warehouse to help with the transport of materials. As I understand it, the first line of business is the only one that allows direct ROS programming; the second is a closed product.

I did not select Fetch for No. 2 because of its research line only. I selected it for this spot because Fetch was a pioneer in the creation of affordable mobile manipulators with its Fetch robot (paired with the Freight mobile platform). Up to the moment it released Fetch, there was no ROS-based mobile manipulator on the market. (Sorry, Turtlebot 2 with a Dynamixel arm doesn’t count as a mobile manipulator.)

Recently, Fetch organized the FetchIt! challenge at ICRA 2019. (My company, The Construct, was a partner contributing to the event’s simulation.) At that event, participants had to program their Fetch to produce some pieces in a manufacturing room. You can check the results here.

Even if Fetch Robotics only produces two robots meeting the criteria above, it was the pioneer that opened the field of ROS-based mobile manipulators. That is why it deserves the No. 2 spot on this list.

I interviewed Melonee Wise, CEO of Fetch Robotics, for the ROS Developers podcast. You can listen to the interview here.

3. Pal Robotics

Pal Robotics is based in Barcelona and was created in 2004. I especially love Pal because I worked there for more than seven years, and many of my friends are there. But love is not the reason I put them in the third position.

Pal Robotics earned No. 3 because it’s the only company in the world that builds and sells human-size humanoid robots. And not just a single type of humanoid, but three different types! The Reem robot, the Reem-C robot, and recently, the TALOS robot.

Pal also produces mobile manipulators similar to the Fetch ones. They are called Tiago, and you can buy them for your research or applications on top. (If you’re interested, you can learn how to program Tiago robots with ROS in an online course that The Construct created in collaboration with Pal Robotics.)

We have recently released a simulation of TALOS, including its walking controllers. You can get it here.

I interviewed Luca Marchionni, CTO of Pal Robotics, for the ROS Developers podcast. You can listen to the interview here. You can also learn what catkin_make is and how to use it.

In addition, I interviewed Victor Lopez, main DevOps engineer of Pal Robotics, for the ROS Developers podcast. You can listen to that interview here.

4. Robotnik

Robotnik is another Spanish company, based in Castellon and founded in 2002. I call it “the Spanish Clearpath.” Really, it has built as many ROS robots as the first company on this list. Robotnik creates and designs mobile manipulators, unmanned ground vehicles of different types, and many types of mobile robots for industrial applications and logistics.

The company is also an expert in customization, integrating third-party robotics parts into a final ROS-based robot that meets your requirements.

Finally, Robotnik’s team includes the people behind the ROS Components online shop, where you can buy components for your robots that are certified to be ROS supported off the shelf. For all this extensive activity in selling ROS robots, Robotnik deserves the fourth position on this list.

A couple of months ago, Robotnik sent us one of its Summit XL robots for experimenting and creating ROS training materials. We used it extensively for our ROS Live Classes, showing how to program Robotnik robots using a cloud robotics platform.

We also created a specific course to train people to program their Summit XL robot.

I interviewed Roberto Martinez, CEO of Robotnik, for the ROS Developers podcast. You can listen to the interview here.

5. Yujin Robots

Yujin is a Korean company specializing in vacuum cleaning robots. However, those robots are not the reason it is on this list, since they do not run ROS onboard. Instead, Yujin is here because it’s the official seller of the Kobuki robot, which is the mobile base of the Turtlebot 2.

The Turtlebot 2 is the most famous ROS robot in the world, even more so than the PR2! Almost every one of us has learned with that robot, either in simulation or in reality. Due to its low cost, it allows you to easily enter into the ROS world.

If you have bought a Turtlebot 2 robot, it is very likely that the base was made by Yujin. We used Kobuki as the base of our robot Barista, and I use several of them at my ROS class at La Salle University.

Additionally, Yujin has developed another ROS robot called GoCart, a very interesting robot for logistics inside buildings (but not warehouses). The robot can be used to send packages from one location in the building to another, even taking elevators along the way.

6. Robotis

This is another Korean company that is making it big in the ROS world. Even if Robotis is well known for its Dynamixel servos, it’s best known in the ROS world for its Turtlebot 3 robot and Open Manipulator, presented as the next generation of the Turtlebot series.

With the development of the Turtlebot 3, Robotis brought the Turtlebot concept to another level, allowing people easier entry into ROS. The manipulator is also very well integrated with the Turtlebot 3, so you can have a complete mobile manipulator for a few hundred dollars.

Even easier, the company has made all the designs of both robots open-source, so you can build the robots yourself. Here are the designs of Turtlebot 3. Here are the designs of Open Manipulator.

7. Shadow Robot

Shadow Robot is based in London. This company is a pioneer in the development of humanoid robotic hands. To my knowledge, Shadow Robot is the only company in the world that sells that kind of robotic hand.

Furthermore, its hands are ROS-programmable off the shelf. Apart from hands, Shadow Robot also produces many other types of grippers, which can be mounted on robotic arms to create complete grasping solutions.

One of its solutions combined with third-party robots was the Smart Grasping System, released in 2016, which combined a three-fingered gripper with a UR5 robot. Here is a simulation of the Smart Grasping System that we created in collaboration with Ugo Cupcic.

Shadow Robot’s products include the Shadow Hand, the Cyberglove, and the Tactile Telerobot.

Demonstrating its leadership in the field, Shadow Robot’s hands were selected by OpenAI for its reinforcement learning experiments with robots that need to learn dexterity.

8. Husarion

Husarion is a Polish company founded in 2013. It sells simple and compact autonomous mobile robots called ROSbots. They are small, four-wheeled robots equipped with a lidar, a camera, and a depth (point cloud) sensor. These robots are perfect for learning ROS with a real robot, or for doing research and learning with a platform more compact than the Turtlebot 2.

Husarion also produces the Panther robot, which is more oriented to outdoor environments, but with the same purpose of research and learning.

What makes Husarion different from other companies selling ROS robots is the compactness of its robots and its creation of the Husarnet network, which connects the robots through the cloud and provides remote control over them.

I interviewed Dominik Novak, CEO of Husarion, for the ROS Developers podcast. You can listen to the interview here.

9. Neobotix

Neobotix is a manufacturer of mobile robots and robot systems in general. It provides robots and manipulators for a wide range of industrial applications, especially in the sector of transporting material.

Neobotix is a spin-off of the Fraunhofer Institute in Stuttgart, and it created the famous Care-O-Bot, used many times in the RoboCup@Home competitions. However, as far as I know, the Care-O-Bot never became a full product, even if you can order five of them and get them delivered, running immediately after unpacking.

At present, Neobotix is focusing on selling mobile bases, which can be customized with robotic arms, converting the whole system into a custom mobile manipulator.

The company also sells the mobile bases and the manipulators separately. Examples of mobile bases include Neobotix’s MP series of robots. On the mobile manipulator side, it sells the MM series. All of them work off-the-shelf with ROS.

Even if Neobotix’s products are full products on their own, I see them more as components that we can use for building more complex robots, allowing us to save time creating all the parts. That is why I have decided to put the company in the ninth position and not above the others.

10. Gaitech

Gaitech is a Chinese company that is mainly dedicated to distributing ROS robots, and ROS products in general, from third-party companies in China. Those third parties include many of the companies on this list, such as Fetch, Pal, and Robotnik.

However, Gaitech has also developed its own line of robots. For example, the Gapter drone is the only drone I’m aware of that works with ROS off the shelf.

Even if Gaitech’s robots are not very popular in the ROS circuit, I have included the company because, at present, it’s the only one in the world building ROS-based drones. (Erle Robotics made ROS-based drones in the past, but as far as I know, that ceased when it became Acutronic Robotics.) Due to this lack of competition, I think Gaitech deserves the No. 10 position.

I interviewed May Zheng, VP of Marketing of Gaitech, for the ROS Developers podcast. You can listen to the interview here.

Honorable mentions

The following is a shortlist of other companies building ROS robots that did not make it onto the list for certain reasons. They may be here next year!

1. Sony

Sony is a complete newcomer to the world of ROS robots, but it has made a grand entrance. Last year, it announced the release of the Aibo robot dog, which fully works on ROS. That was a big surprise to all of us, especially since Sony abandoned the Aibo project back in 2005.

Sony’s revived robot dog could have put it on the list above, except for the fact that the robot is still too new and can only be bought in the U.S. and Japan. Furthermore, the robot still has a very limited programming SDK, so you can barely program it.

If you are interested in the inner workings of Aibo with ROS, have a look at the presentation by Tomoya Fujita, one of the project’s engineers, at the ROS Developers Conference 2019. He explained the interprocess communication mechanism they had to develop for ROS in order to reduce Aibo’s battery consumption. Amazing stuff, fully compatible with ROS nodes and using the standard communication protocol!

2. Ubiquity Robotics

This company sells simple ROS-based mobile bases for the development of third-party solutions, or as it calls them, “robot applications.” Ubiquity Robotics’ goal is to provide a solid mobile base with off-the-shelf navigation on top of which you can build other solutions like telepresence, robotic waiters, and so on.

Ubiquity Robotics is a young company with a good idea in mind, but it’s very close to existing solutions like Neobotix or Robotnik. Let’s see next year how they have evolved.

I interviewed David Crawley, CEO of Ubiquity, for the ROS Developers podcast. You can listen to the interview here.

3. Acutronic Robotics

This company started out building ROS-based drones, but recently it changed direction to produce ROS hardware microchips. Acutronic produces the MARA robot, an industrial arm based on ROS2 running on the H-ROS microchips.

However, as far as I know, the MARA robot is not Acutronic’s main business, since the company created it and sells it as an example of what can be done with H-ROS. That is why I decided not to include this company in the main top 10 list.

By the way, we also collaborated with Acutronic to create a series of videos about how to learn ROS2 using their MARA robot.

I interviewed Victor Mayoral, CEO of Acutronic, for the ROS Developers podcast. You can listen to the interview here.

ROS conclusions

Most ROS-based robotics companies concentrate on wheeled robots. A few exceptions are the humanoid robots of Pal Robotics, the drones of Gaitech, the robotic hands from Shadow Robot, and the robot arms from Neobotix.

It’s very interesting that we see almost no drones and no robotic arms running ROS off the shelf, since both of them are very basic types of robots. There are many robotic arm companies that provide ROS drivers for their robots and many packages for their control, like Universal Robots or Kinova.

But of the listed companies, only Neobotix actually provides an off-the-shelf arm robot with its MM series. I think there is a lot of market space for new ROS-based drones and robotic arms. Take note of that, entrepreneurs of the world!

Finally, I would like to acknowledge that I do not know all the ROS companies out there. Even if I have done my research to create this article, I may have missed some companies worth mentioning. Let me know if you know of or have a company that sells ROS robots and should be on this list, so I can update it and correct any mistakes.

Ricardo Tellez

About the author

Ricardo Tellez is co-founder and CEO of The Construct. Prior to this role, he was a postdoctoral researcher at the Robotics Institute of the Spanish Research Council. Tellez worked for more than seven years at Pal Robotics developing humanoid robots, including its navigation system and reasoning engine. He holds a Ph.D. in artificial intelligence and aims to create robots that really understand what they are doing. Tellez spoke at the 2019 Robotics Summit & Expo in Boston.

6 common mistakes when setting up safety laser scanners


Having worked in industrial automation for most of my career, I’d like to think that I’ve built up a wealth of experience in the field of industrial safety sensors. Familiar with safety laser scanners for over a decade, I have been involved in many designs and installations.

I currently work for SICK (UK) Ltd., which invented the safety laser scanner, and I continually see people making the same mistakes time and time again. This short piece highlights, in my opinion, the most common of them.

1. Installation and mounting: Thinking about safety last

If you are going to remember just one point, then this is it. Too many times have I been present at an “almost finished” machine and asked, “Right, where can I stick this scanner?”

Inevitably, what ends up happening is that blind spots (shadows created by obstacles) become apparent all over the place. This requires mechanical “bodges” and maybe even additional scanners to cover the complete area when one scanner may have been sufficient if the cell was designed properly in the first place.

In safety, designing something out is by far the most cost-effective and robust solution. If you know you are going to be using a safety laser scanner, then design it in from the beginning — it could save you a world of pain. Consider blind zones, coverage and the location of hazards.

This also goes for automated guided vehicles (AGVs). For example, the most appropriate position to completely cover an AGV is to have two scanners adjacent to each other on the corners integrated into the vehicle (See Figure 1).

Figure 1: Typical AGV scanner mounting and integration. | Credit: SICK

2. Incorrect multiple sampling values configured

An often misunderstood concept, multiple sampling indicates how often an object has to be scanned in succession before a safety laser scanner reacts. By default and out of the box, this value is usually two scans, which is the minimum. However, it may vary from manufacturer to manufacturer. A higher multiple sampling value reduces the possibility that insects, weld sparks, weather (for outdoor scanners) or other particles cause the machine to shut down.

Increasing the multiple sampling can make it possible to increase a machine’s availability, but it can also have negative effects on the application. Increasing the number of samples is basically adding an OFF-Delay to the system, meaning that your protective field may need to be bigger due to the increase in the total response time.

If a scanner has a robust detection algorithm, then you shouldn’t have to increase this value very much. But when the value is changed, you could be creating a hazard due to the reduced effectiveness of the protective device.

If the value is changed, you should make a note of the safety laser scanner’s new response time and adjust the minimum distance from the hazardous point accordingly to ensure it remains safe.
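As a rough illustration of why the minimum distance must be revisited, here is a minimal sketch of the EN ISO 13855 calculation for a horizontal protective field, S = K × T + C, assuming the commonly cited constants (K = 1600 mm/s and C = 1200 − 0.4 × H, never less than 850 mm, with H the field height above the floor). The numbers below are illustrative only; always verify against the current edition of the standard and the scanner’s datasheet.

```python
# Minimal sketch of EN ISO 13855 for a horizontal protective field:
# S = K * T + C, with K = 1600 mm/s (walking approach speed) and
# C = 1200 - 0.4 * H (never below 850 mm), H being the field height in mm.
# Illustrative only; verify against the standard and the scanner datasheet.

def minimum_distance_mm(total_response_time_s, field_height_mm):
    K = 1600.0                                         # approach speed, mm/s
    C = max(1200.0 - 0.4 * field_height_mm, 850.0)     # intrusion allowance, mm
    return K * total_response_time_s + C

# Example: more multiple sampling means extra scan cycles in the response time.
base   = minimum_distance_mm(0.12, 150)   # ~1332 mm at T = 120 ms
slower = minimum_distance_mm(0.20, 150)   # ~1460 mm at T = 200 ms
print(f"protective field must grow by about {slower - base:.0f} mm")   # ~128 mm
```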

Furthermore, in vertical applications, if the multiple sampling is set too high, then it may be possible for a person to pass through the protective field without being detected — so care must be taken. For one of our latest safety laser scanners, the microScan3, we provide the following advice:

Figure 2: Recommended multiple sampling values. | Credit: SICK

3. Incorrect selection of safety laser scanner

The maximum protective field that a scanner can provide is an important feature, but this value alone should not decide whether the scanner is suitable for an application. A safety laser scanner is a Type 3 device, according to IEC 61496, and an Active Opto-Electronic Protective Device responsive to Diffuse Reflection (AOPDDR). This means that it depends on diffuse reflections off objects. Therefore, to achieve longer ranges, scanners must be more sensitive. In practice, this means that scanning angle, and certainly detection robustness, can be sacrificed.

This could lead to a requirement for a higher number of multiple samples and perhaps a loss of angular resolution. The increased response times and reduced angle could mean that larger protective fields are required, and even additional scanners — even though you bought the longer-range one. A protective field should be as large as required but as small as possible.

A shorter-range scanner may be more robust than its longer-range big brother and, hence, keep the response time down, reduce the footprint, reduce cost and eliminate annoying false trips.

4. Incorrect resolution selected

The harmonized standard EN ISO 13855 can be used for positioning safeguards with respect to the approach speeds of the human body. Persons or parts of the body to be protected may not be detected, or not detected in time, if the positioning or configuration is incorrect. The safety laser scanner should be mounted so that crawling beneath, climbing over and standing behind the protective fields is not possible.

If crawling under could create a hazardous situation, then the safety laser scanner should not be mounted any higher than 300 mm. At this height, a resolution of up to 70 mm can be selected to ensure that a human leg can be detected. However, it is sometimes not possible to mount the safety laser scanner at this height. If mounted below 300 mm, then a resolution of 50 mm should be used.

It is a very common mistake to mount the scanner lower than 300 mm and leave the resolution at 70 mm. Selecting a finer resolution may also reduce the maximum protective field possible on a safety laser scanner, so it is important to check.
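For reference, the relation between resolution and mounting height for horizontal fields is often expressed in EN ISO 13855 as H = 15 × (d − 50). The sketch below assumes that formula and is illustrative only, not a substitute for the standard or the scanner manual.

```python
# Minimal sketch of the EN ISO 13855 relation between scanner resolution d
# and the mounting height H of a horizontal detection plane: H = 15 * (d - 50).
# Illustrative only; the standard and the scanner manual are authoritative.

def min_mounting_height_mm(resolution_mm):
    """Lowest mounting height at which resolution d is still sufficient."""
    return 15.0 * (resolution_mm - 50.0)

def max_resolution_mm(mounting_height_mm):
    """Coarsest resolution allowed at a given mounting height."""
    return mounting_height_mm / 15.0 + 50.0

print(min_mounting_height_mm(70))    # 300.0 -> 70 mm resolution needs >= 300 mm height
print(max_resolution_mm(150))        # 60.0  -> at 150 mm, choose 50 mm, not 70 mm
```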

5. Ambient/environmental conditions were not considered

Sometimes safety laser scanners just aren’t suitable in an application. Coming from someone who sells and supports these devices, that is a difficult thing to say. However, scanners are electro-sensitive protective equipment and infrared light can be a tricky thing to work with. Scanners have become very robust devices over the last decade with increasingly complex detection techniques (SafeHDDM by SICK) and there are even safety laser scanners certified to work outdoors (outdoorScan3 by SICK).

However, there is a big difference between safety and availability and expectations need to be realistic right from the beginning. A scanner might not maintain 100% machine availability if there is heavy dust, thick steam, excessive wood chippings, or even dandelions constantly in front of the field of view. Even though the scanner will continue to be safe and react to such situations, trips due to ambient conditions may not be acceptable to a user.

For extreme environments, the following question should be asked: “What happens when the scanner is not available due to extreme conditions?” This is especially true for outdoor applications in heavy rain, snow or fog. A full assessment of the ambient conditions, and potentially even proof tests, should be carried out. This particular issue can become a very difficult, expensive, and sometimes impossible thing to fix.

6. Non-safe switching of field sets

A field set in a safety laser scanner can consist of multiple different field types. For example, a field set could consist of four safe protective fields (Field Set 1), or it could consist of one safe protective field, two non-safe warning fields and a safe detection field (Field Set 2). See Figure 3.

Figure 3: Safety laser scanner field sets. | Credit: SICK

A scanner can store lots of different fields that can be selected using either hardwired inputs or safe networked inputs (CIP Safety, PROFISAFE, EFI Pro). This is a feature that industry finds very useful for both safety and productivity in Industry 4.0 applications.

However, the safety function (as per EN ISO 13849/EN 62061) for selecting the field set at any particular point in time should normally have the same safety robustness (PL/SIL) as the scanner itself. A safety laser scanner can be used in safety functions up to PLd/SIL2.

If we look at AGVs, for example, usually two rotary encoders are used to switch between fields, achieving field switching up to PLe/SIL3. There are now also safety-rated rotary encoders that can be used alone to achieve field switching up to PLd/SIL2.

However, sometimes the safety of the mode selection is overlooked. For example, if a standard PLC or a single-channel limit switch is used for selecting a field set, then this would reduce the PL/SIL of the whole system to possibly PLc or even PLa. An incorrect selection of field set could mean that an AGV is operating with a small protective field in combination with a high speed, and hence a long stopping time, creating a hazardous situation.

Summary

Scanners are complex devices that have been around for a long time, and there is a lot of choice in the market with regard to range, connectivity, size and robustness. There are also a lot of variables to consider when designing a safety solution using scanners. If you are new to this technology, then it is a good idea to contact the manufacturer for advice on the application of these devices.

Here at SICK we offer complimentary services to our customers such as consultancy, on-site engineering assistance, risk assessment, safety concept and safety verification of electrosensitive protective equipment (ESPEs). We are always happy to answer any questions. If you’d like to get in touch then please do not hesitate.

About the Author

Dr. Martin Kidman is a Functional Safety Engineer and Product Specialist, Machinery Safety at SICK (UK) Ltd. He received his Ph.D. at the University of Liverpool in 2010 and has been involved in industrial automation since 2006, working for various sensor manufacturers.

Kidman has been at SICK since January 2013 as a product specialist for machinery safety providing services, support and consultancy for industrial safety applications. He is a certified FS Engineer (TUV Rheinland, #13017/16) and regularly delivers seminars and training courses covering functional safety topics. Kidman has also worked for a notified body testing to the Low Voltage Directive in the past.

Robots can play key roles in repairing our infrastructure


Pipeline inspection robot

I was on the phone recently with a large multinational corporate investor discussing the applications for robotics in the energy market. He expressed his frustration about the lack of products to inspect and repair active oil and gas pipelines, citing too many catastrophic accidents. His point was further supported by a Huffington Post article reporting that, over a twenty-year period, such tragedies have led to 534 deaths, more than 2,400 injuries, and more than $7.5 billion in damages. The study concluded that an incident occurs every 30 hours across America’s vast transcontinental pipelines.

The global market for pipeline inspection robots is estimated to exceed $2 billion in the next six years, more than tripling today’s $600 million in sales. The Zion Market Research report states: “Robots are being used increasingly in various verticals in order to reduce human intervention from work environments that are dangerous … Pipeline networks are laid down for the transportation of oil and gas, drinking waters, etc. These pipelines face the problem of corrosion, aging, cracks, and various another type of damages…. As the demand for oil and gas is increasing across the globe, it is expected that the pipeline network will increase in length in the near future thereby increasing the popularity of the in-pipe inspection robots market.”

Industry consolidation plays key role

Another big indicator of this burgeoning industry is the growth of consolidation. In December 2017, Pure Technologies was purchased by New York-based Xylem for more than $500 million. Xylem was already a leader in smart technology solutions for water and wastewater management pump facilities. Its acquisition of Pure enabled the industrial company to expand its footprint into the oil and gas market. Utilizing Pure’s digital inspection expertise with mechatronics, the combined companies are able to take a leading position in pipeline diagnostics.

Patrick Decker, Xylem president and chief executive, explained, “Pure’s solutions strongly complement the broader Xylem portfolio, particularly our recently acquired Visenti and Sensus solutions, creating a unique and disruptive platform of diagnostic, analytics and optimization solutions for clean and wastewater networks. Pure will also bring greater scale to our growing data analytics and software-as-a-service capabilities.”

According to estimates at the time of the merger, almost 25% of Pure’s business was in the oil and gas industry. Today, Pure offers a suite of products for above ground and inline inspections, as well as data management software. In addition to selling its machines, sensors and analytics to the energy sector, it has successfully deployed units in thousands of waterways globally.

This past February, Eddyfi (a leading provider of testing equipment) acquired Inuktun, a robot manufacturer of semi-autonomous crawling systems. This was the sixth acquisition by fast growing Eddyfi in less than three years. As Martin Thériault, Eddyfi’s CEO, elaborates: “We are making a significant bet that the combination of Inuktun robots with our sensors and instruments will meet the increasing needs from asset owners. Customers can now select from a range of standard Inuktun crawlers, cameras and controllers to create their own off-the-shelf, yet customized, solutions.”

Colin Dobell, president of Inuktun, echoed Thériault’s sentiments: “This transaction links us with one of the best! Our systems and technology are suitable to many of Eddyfi Technologies’ current customers and the combination of the two companies will strengthen our position as an industry leader and allow us to offer truly unique solutions by combining some of the industry’s best NDT [nondestructive testing] products with our mobile robotic solutions. The future opportunities are seemingly endless. It’s very exciting.” In addition to Xylem and Eddyfi, other entrants into this space include CUES, Envirosight, GE Inspection Robotics, IBAK Helmut Hunger, Medit (Fiberscope), RedZone Robotics, MISTRAS Group, RIEZLER Inspektions Systeme, and Honeybee Robotics.

Repairing lines with micro-robots

While most of the current technologies focus on inspection, the bigger opportunity could be in actively repairing pipelines with micro-bots. Last year, the government of the United Kingdom began a $35 million study with six universities to develop mechanical insect-like robots to automatically fix its large underground network. According to the government’s press release, the goal is to develop robots of one centimeter in size that will crawl, swim and quite possibly fly through water, gas and sewage pipes. The government estimates that underground infrastructure accounts for $6 billion annually in labor and business disruption costs.

One of the institutions charged with this endeavor is the University of Sheffield’s Department of Mechanical Engineering led by Professor Kirill Horoshenkov. Dr. Horoshenkov boasts that his mission is more than commercial as “Maintaining a safe and secure water and energy supply is fundamental for society but faces many challenges such as increased customer demand and climate change.”

Horoshenkov, a leader in acoustical technology, expands further on the research objectives of his team, “Our new research programme will help utility companies monitor hidden pipe infrastructure and solve problems quickly and efficiently when they arise. This will mean less disruption for traffic and general public. This innovation will be the first of its kind to deploy swarms of miniaturised robots in buried pipes together with other emerging in-pipe sensor, navigation and communication solutions with long-term autonomy.”

England is becoming a hotbed for robotic insects; last summer, Rolls-Royce shared with reporters its efforts in developing mechanical bugs to repair airplane engines. The engineers at the British aerospace giant were inspired by the research of Harvard professor Robert Wood and his ambulatory microrobot for search and rescue missions. James Kell of Rolls-Royce proclaims this could be a game changer: “They could go off scuttling around reaching all different parts of the combustion chamber. If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

Currently, the Harvard robot is too large to buzz through jet engines, but Rolls-Royce is not waiting for the Boston scientists, as it has established, with the University of Nottingham, a Centre for Manufacturing and On-Wing Technologies “to design and build a range of bespoke prototype robots capable of performing jet engine repairs remotely.” Project lead Dragos Axinte is optimistic about the spillover effect of this work into the energy market: “The emergence of robots capable of replicating human interventions on industrial equipment can be coupled with remote control strategies to reduce the response time from several days to a few hours. As well as with any Rolls-Royce engine, our robots could one day be used in other industries such as oil, gas and nuclear.”

TRI tackles manipulation research for reliable, robust human-assist robots

Wouldn’t it be amazing to have a robot in your home that could work with you to put away the groceries, fold the laundry, cook your dinner, do the dishes, and tidy up before the guests come over? For some of us, a robot assistant – a teammate – might only be a convenience.

But for others, including our growing population of older people, applications like this could be the difference between living at home or in an assisted care facility. Done right, we believe these robots will amplify and augment human capabilities, allowing us to enjoy longer, healthier lives.

Decades of prognostications about the future – largely driven by science fiction novels and popular entertainment – have encouraged public expectations that someday home robots will happen. Companies have been trying for years to deliver on such forecasts and figure out how to safely introduce ever more capable robots into the unstructured home environment.

Despite this age of tremendous technological progress, the robots we see in homes to date are primarily vacuum cleaners and toys. Most people don’t realize how far today’s best robots are from being able to do basic household tasks. When they see heavy use of robot arms in factories or impressive videos on YouTube showing what a robot can do, they might reasonably expect these robots could be used in the home now.

Bringing robots into the home

Why haven’t home robots materialized as quickly as some have come to expect? One big challenge is reliability. Consider:

  • If you had a robot that could load dishes into the dishwasher for you, what if it broke a dish once a week?
  • Or, what if your child brings home a “No. 1 DAD!” mug that she painted at the local art studio, and after dinner, the robot discards that mug into the trash because it didn’t recognize it as an actual mug?

A major barrier for bringing robots into the home are core unsolved problems in manipulation that prevent reliability. As I presented this week at the Robotics: Science and Systems conference, the Toyota Research Institute (TRI) is working on fundamental issues in robot manipulation to tackle these unsolved reliability challenges. We have been pursuing a unique combination of robotics capabilities focused on dexterous tasks in an unstructured environment.

Unlike the sterile, controlled and programmable environment of the factory, the home is a “wild west” – unstructured and diverse. We cannot expect lab tests to account for every different object that a robot will see in your home. This challenge is sometimes referred to as “open-world manipulation,” as a callout to “open-world” computer games.

Despite recent strides in artificial intelligence and machine learning, it is still very hard to engineer a system that can deal with the complexity of a home environment and guarantee that it will (almost) always work correctly.

TRI addresses the reliability gap

Above is a demonstration video showing how TRI is exploring the challenge of robustness that addresses the reliability gap. We are using a robot loading dishes in a dishwasher as an example task. Our goal is not to design a robot that loads the dishwasher, but rather we use this task as a means to develop the tools and algorithms that can in turn be applied in many different applications.

Our focus is not on hardware, which is why we are using a factory robot arm in this demonstration rather than designing one that would be more appropriate for the home kitchen.

The robot in our demonstration uses stereo cameras mounted around the sink and deep learning algorithms to perceive objects in the sink. There are many robots out there today that can pick up almost any object — random object clutter clearing has become a standard benchmark robotics challenge. In clutter clearing, the robot doesn’t require much understanding about an object — perceiving the basic geometry is enough.

For example, the algorithm doesn’t need to recognize whether the object is a plush toy, a toothbrush, or a coffee mug. Given this, these systems are also relatively limited in what they can do with those objects; for the most part, they can only pick up the objects and drop them in another location. In the robotics world, we sometimes refer to these robots as “pick and drop.”

Loading the dishwasher is actually significantly harder than what most roboticists are currently demonstrating, and it requires considerably more understanding about the objects. Not only does the robot have to recognize a mug or a plate or “clutter,” but it has to also understand the shape, position, and orientation of each object in order to place it accurately in the dishwasher.

TRI’s work in progress shows not only that this is possible, but that it can be done with robustness that allows the robot to continuously operate for hours without disruption.

Toyota Research Institute

Getting a grasp on household tasks

Our manipulation robot has a relatively simple hand — a two-fingered gripper. The hand can make relatively simple grasps on a mug, but its ability to pick up a plate is more subtle. Plates are large and may be stacked, so we have to execute a complex “contact-rich” maneuver that slides one gripper finger under and between plates in order to get a firm hold. This is a simple example of the type of dexterity that humans achieve easily, but that we rarely see in robust robotics applications.

Silverware can also be tricky — it is small and shiny, which makes it hard to see with a machine-learning camera. Plus, given that the robot hand is relatively large compared to the smaller sink, the robot occasionally needs to stop and nudge the silverware to the center of the sink in order to do the pick. Our system can also detect if an object is not a mug, plate or silverware, label it as “clutter,” and move it to a “discard” bin.

Connecting all of these pieces is a sophisticated task planner, which is constantly deciding what task the robot should execute next. This task planner decides if it should pull out the bottom drawer of the dishwasher to load some plates, pull out the middle drawer for mugs, or pull out the top drawer for silverware.

Like the other components, we have made it resilient — if the drawer suddenly gets closed when it needs to be open, the robot will stop, put down the object on the countertop, and pull the drawer back out to try again. This response shows how different this capability is from that of a typical precision, repetitive factory robot, which is usually isolated from human contact and environmental randomness.
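As a loose illustration of this kind of recovery behavior, here is a minimal sketch of a retry loop around a single loading step. Every method name here (pick, detect_drawer_open, place_on_counter, open_drawer, place_in_drawer) is a hypothetical stand-in, not TRI's actual API.

```python
# Minimal sketch of a recovery-aware loading step, loosely mirroring the
# behavior described above. All method names are hypothetical stand-ins.

def load_into_drawer(robot, obj, drawer, max_retries=3):
    for _ in range(max_retries):
        robot.pick(obj)
        if robot.detect_drawer_open(drawer):
            robot.place_in_drawer(obj, drawer)
            return True
        # The drawer was closed while we were holding the object: put it down
        # on the counter, pull the drawer back out, and try again.
        robot.place_on_counter(obj)
        robot.open_drawer(drawer)
    return False
```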

Simulation key to success

The cornerstone of TRI’s approach is the use of simulation. Simulation gives us a principled way to engineer and test systems of this complexity with incredible task diversity and machine learning and artificial intelligence components. It allows us to understand what level of performance the robot will have in your home with your mugs, even though we haven’t been able to test in your kitchen during our development.

An exciting achievement is that we have made great strides in making simulation robust enough to handle the visual and mechanical complexity of this dishwasher loading task and on closing the “sim to real” gap. We are now able to design and test in simulation and have confidence that the results will transfer to the real robot. At long last, we have reached a point where we do nearly all of our development in simulation, which has traditionally not been the case for robotic manipulation research.

We can run many more tests in simulation and more diverse tests. We are constantly generating random scenarios that will test the individual components of the dish loading plus the end-to-end performance.

Let me give you a simple example of how this works. Consider the task of extracting a single mug from the sink.  We generate scenarios where we place the mug in all sorts of random configurations, testing to find “corner cases” — rare situations where our perception algorithms or grasping algorithms might fail. We can vary material properties and lighting conditions. We even have algorithms for generating random, but reasonable, shapes of the mug, generating everything from a small espresso cup to a portly cylindrical coffee mug.

We conduct simulation testing through the night, and every morning we receive a report that gives us new failure cases that we need to address.
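The nightly testing loop can be pictured roughly as follows. This is a minimal sketch that assumes a simulator and a perception pipeline exposed through hypothetical helpers (random_mug_shape, random_pose, render_scene, run_pick_pipeline); it conveys the structure of randomized scenario search, not TRI's actual tooling.

```python
# Minimal sketch of nightly randomized scenario testing. The helpers
# random_mug_shape, random_pose, render_scene and run_pick_pipeline are
# hypothetical stand-ins for simulator and perception components.
import random

def random_scenario(rng):
    return {
        "mug": random_mug_shape(rng),        # espresso cup .. portly cylinder
        "pose": random_pose(rng),            # position/orientation in the sink
        "lighting": rng.uniform(0.2, 1.0),   # relative scene brightness
        "friction": rng.uniform(0.3, 1.2),   # surface material property
    }

def nightly_run(num_trials=10000, seed=0):
    rng = random.Random(seed)
    failures = []
    for i in range(num_trials):
        scenario = random_scenario(rng)
        if not run_pick_pipeline(render_scene(scenario)):
            failures.append((i, scenario))   # corner case to triage in the morning
    return failures
```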

Early on, those failures were relatively easy to find, and easy to fix. Sometimes they are failures of the simulator — something happened in the simulator that could never have happened in the real world — and sometimes they are problems in our perception or grasping algorithms. We have to fix all of these failures.

TRI is using an industrial robot for household tasks to test its algorithms. Source: TRI

As we continue down this road to robustness, the failures are getting more rare and more subtle. The algorithms that we use to find those failures also need to get more advanced. The search space is so huge, and the performance of the system so nuanced, that finding the corner cases efficiently becomes our core research challenge.

Although we are exploring this problem in the kitchen sink, the core ideas and algorithms are motivated by, and are applicable to, related problems such as verifying automated driving technologies.

‘Repairing’ algorithms

The next piece of our work focuses on the development of algorithms to automatically “repair” the perception algorithm or controller whenever we find a new failure case. Because we are using simulation, we can test our changes against not only this newly discovered scenario, but also make sure that our changes also work for all of the other scenarios that we’ve discovered in the preceding tests.

Of course, it’s not enough to fix this one test. We have to make sure we also do not break all of the other tests that passed before. It’s possible to imagine a not-so-distant future where this repair can happen directly in your kitchen, whereby if one robot fails to handle your mug correctly, then all robots around the world learn from that mistake.
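One way to picture this accumulate-and-regress loop is the sketch below, where each newly discovered failure scenario joins a growing regression set and a candidate fix is accepted only if it passes both the new case and everything found before. The names here are hypothetical, not TRI's actual code.

```python
# Minimal sketch of the accumulate-and-regress loop: a candidate fix must pass
# the newly discovered failure scenario and every scenario found before it.
# candidate_pipeline and the scenario objects are hypothetical stand-ins.

def accept_fix(candidate_pipeline, new_failure, regression_set):
    for scenario in [new_failure] + list(regression_set):
        if not candidate_pipeline(scenario):
            return False                     # the fix breaks an earlier case
    regression_set.append(new_failure)       # keep the new case for the future
    return True
```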

We are committed to achieving dexterity and reliability in open-world manipulation. Loading a dishwasher is just one example in a series of experiments we will be using at TRI to focus on this problem.

It’s a long journey, but ultimately it will produce capabilities that will bring more advanced robots into the home. When this happens, we hope that older adults will have the help they need to age in place with dignity, working with a robotic helper that will amplify their capabilities while allowing them to remain independent longer.

Editor’s note: This post by Dr. Russ Tedrake, vice president of robotics research at TRI and a professor at the Massachusetts Institute of Technology, is republished with permission from the Toyota Research Institute.

‘Boaty McBoatface’ shows promising future of AUVs


The biggest mystery in the universe could possibly be right here on Earth. According to the National Oceanic and Atmospheric Administration (NOAA), as much as 95% of the oceans and 99% of the ocean floor has yet to be explored. Given more than 70% of the planet is covered by water, the promise for unmanned systems to go deeper into the depths of the sea could be one of the ripest opportunities for autonomy. Besides the benefits for conservationism, commercial missions are estimated to drive billions of dollars of new revenues. Already the demand for such hardware systems accounts for more than $2 billion, which many project will climb to more than $6 billion by 2025.

Today’s underwater drone market is in its infancy with most sensor-packed, torpedo-like devices being tugged around the globe on the decks of ships. These products break down into two main categories:

  • Remote Operated Vehicles (ROV)
  • Autonomous Underwater Vehicles (AUV)

As an example of the emerging possibilities for AUVs, earlier this month the British government-backed project, Boaty McBoatface, traversed more than 112 miles autonomously at depths of 4,000 meters to shed new light on climate change and rising sea levels.

In the words of Dr. Eleanor Frajka-Williams of the National Oceanography Centre in Southampton, England, “the data from Boaty McBoatface gave us a completely new way of looking at the deep ocean – the path taken by Boaty created a spatial view of the turbulence near the seafloor.” Frajka-Williams anticipates that the information will help scientists predict the impact of global warming.

Dr. Povl Abrahamsen of the British Antarctic Survey in Cambridge, England echoed this view, “This study is a great example of how exciting new technology such as the unmanned submarine ‘Boaty McBoatface’ can be used along with ship-based measurements and cutting-edge ocean models to discover and explain previously unknown processes affecting heat transport within the ocean.” The future plans for Boaty include diving underneath Arctic ice and into subsea volcanos.

Boaty operates in a crowded space of close to fifty for-profit companies competing for market share. The activities of both large multinational corporations and upstart technology providers range from defense applications to commercial exploration to scientific research. One of the largest purveyors is BlueFin Robotics, which was purchased by General Dynamics in 2016. Since then, there have been a number of high-profile aquatic acquisitions, including Riptide Autonomous Solutions by BAE Systems, Liquid Robotics by Boeing, and multiple investments in Ocean Aero by Lockheed Martin. The biggest driver of this consolidation is the demand from the military, particularly the Navy, for autonomous search-and-destroy missions.

In September 2017, the U.S. Navy established the Unmanned Undersea Vehicle Squadron 1 (UUVRON-1). When explaining this move, Captain Robert Gaucher stated, “Standing up UUVRON 1 shows our Navy’s commitment to the future of unmanned systems and undersea combat.” This sentiment was shared by Commander Corey Barker, spokesman of the famed Submarine Force Pacific: “In addition to providing a rapid, potentially lower cost solution to a variety of mission sets, UUVs can mitigate operations that pose increased risk to manned platforms.”

Last summer, the Navy appointed a dedicated commander of UUVRON-1, Scott Smith. In a recent interview, Smith described his vision for sea drones: “Those missions that are too dangerous to put men on, or those missions that are too mundane and routine, but important ― like monitoring ― we’ll use them for those missions, as well. I don’t think we’ll ever replace the manned platform, but we’ll certainly augment them to a large degree.” It is this augmentation that is generating millions of dollars of defense contracts, which are starting to spill over to private industry.

Boston-based Dive Technologies, founded by a team of former BlueFin engineers, is building innovative technology to broaden the use of unmanned marine systems. Speaking with me this week, its CEO, Jerry Sgobbo, described nascent opportunities for his suite of innovations: “We see demand for offshore survey work in the U.S. increasing significantly as grid scale offshore wind farms are developed over the next decade. In particular, much of this work will take place in New England and mid-Atlantic waters.”

Sgobbo is referring to Rhode Island’s recent move in constructing the first offshore wind farm in the United States, capitalizing on the region’s famous gale-force gusts. Based upon the success of the Block Island project, other states are quickly putting forth legislation to follow suit. Just this week, Senator Edward Markey of Massachusetts declared in Congress that “offshore wind has the potential to change the game on climate change, and those winds of change are blowing off the shores of Massachusetts. Offshore wind projects are a crucial part of America’s clean energy future, creating tens of thousands of jobs up and down the East Coast and reducing carbon pollution. In order to harness this potential, we need to provide this burgeoning industry the long-term certainty in the tax code that it needs.”

Sgobbo believes that such moves will spark greater investment in automation to support the harnessing of renewable energy. Dive’s value proposition is collecting imaging that enables wind farm builders to better map the ocean floor for their large structures. As the founder states, “For commercial customers, this data is necessary to support deepwater energy infrastructure projects. For defense customers, the same imaging approach is used to locate sea mines.”

Dive’s flexible platform readily lends itself to the development of offshore wind turbines. Sgobbo further explained, “Dive’s AUV is a large platform with very long range and is intended to operate independently without the need for the infrastructure that traditionally supports an AUV mission today. This allows a survey operator to reduce cost as well as perform survey work at times of the year when it is impractical to use a towed system or smaller AUV.”

The startup leveraged its extensive industry knowledge to reinvent how marine drones are utilized. “When we started Dive Technologies, my co-founders and I first took an in-depth look at how medium and large sized AUVs are being operated and manufactured across the industry today and we saw vast potential for innovation and improvement,” recalled Sgobbo. “Our new AUV platform, the ‘DIVE-LD,’ addresses the industry’s needs by drastically increasing payload capacity and on-board energy storage but, most importantly, driving down the cost to collect offshore data. We do this by offering quickly configurable payload space to accommodate specific sensors needed for a job or mission, and then letting our robot do what robots are meant to do, operate autonomously and with minimal human intervention.”

This means that Dive’s ability to tailor its product to specific mission requirements, along with greater battery capacity, enables it to travel farther and deeper than its competitors. “Today’s offshore AUV missions are typically conducted with a dozen humans in an expensive surface support vessel which leads to important survey work being prohibitively expensive. Dive’s novel engineering solution will categorically shift this paradigm,” expounds Sgobbo.

As the growth of marine robotics begins to proliferate across the globe, how businesses utilize the technology will expand into new categories. Sgobbo predicts, “Often, the military and commercial missions have used very similar AUV technology, but are looking for different things in the ocean. Looking forward, both customers are interested in longer range AUVs. For commercial customers, the goal is to reduce operating costs. For defense, a low cost, long range AUV opens new mission sets beyond mine countermeasure and will further lend to keeping sailors safe from dull, dirty, and dangerous missions. Also, AUVs are increasingly important data collection tools for the scientific community.”

As we closed our discussion, he optimistically quipped, “With approximately 90% of the world’s trade carried across these marine highways, we see the U.S. Navy investing heavily in next generation AUV technologies to maintain a forward presence and keep shipping lanes secure. As a team, we also look forward to the opportunities we’ll discover in the unknown.”

Fears of job-stealing robots are misplaced, say experts

Artificial intelligence will shift jobs, not replace them. | Reuters/Issei Kato

Some good news: The robots aren’t coming for your job. Experts at the Conference on the Future of Work at Stanford University last month said that fears that rapid advances in artificial intelligence, machine learning, and automation will leave all of us unemployed are vastly overstated.

But concerns over growing inequality and the lack of opportunity for many in the labor force — serious matters linked to a variety of structural changes in the economy — are well-founded and need to be addressed, four scholars on artificial intelligence and the economy told an audience at Stanford Graduate School of Business (GSB).

That’s not to say that AI isn’t having a profound effect on many areas of the economy. It is, of course. But understanding the link between the two trends is difficult, and it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist, during the forum, which was sponsored by the Stanford Institute for Human-Centered Artificial Intelligence.

Today’s workforce is sharply divided by levels of education, and those who have not gone beyond high school are affected the most by long-term changes in the economy, said David Autor, professor of economics at the Massachusetts Institute of Technology.

“It’s a great time to be young and educated. But there’s no clear land of opportunity” for adults who haven’t been to college, said Autor during his keynote presentation.

When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation, said Varian, founding dean of the School of Information at the University of California, Berkeley. Most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.

However, demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude, he said. Demographic trends are also easier to predict, since we already know, aside from immigration and catastrophes, how many 40-year-olds will live in a country 30 years from now.

Comparing the most aggressive expert estimates about the impact of automation on labor supply with demographic trends that point to a workforce reduction, Varian said he found that the demographic effect on the labor market is 53% larger than the automation effect. Thus, real wages are more likely to increase than to decrease when both factors are considered.

Automation’s slow crawl

Why hasn’t automation had a more significant effect on the economy to date? The answer isn’t simple, but there’s one key factor: Jobs are made up of a myriad of tasks, many of which are not easily automated.

“Automation doesn’t generally eliminate jobs,” Varian said. “Automation generally eliminates dull, tedious, and repetitive tasks. If you remove all the tasks, you remove the job. But that’s rare.”

Consider the job of a gardener. Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores. Mowing and watering are easy tasks to automate, but other chores would cost too much to automate or would be beyond the capabilities of machines — so gardeners are still in demand.

Some jobs, including within the service industry, seem ripe for automation. However, a hotel in Nagasaki, Japan, was the subject of amused news reports when it was forced to “fire” its incompetent robot receptionists and room attendants.

Jobs, unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator, Varian observed. But some of the tasks carried out by elevator operators, such as greeting visitors and guiding them to the right office, have been distributed to receptionists and security guards.

Even the automotive industry, which accounts for roughly half of all robots used by industry, has found that automation has its limits.

“Excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated,” Elon Musk, the founder and chief executive of Tesla Motors, said last year.

The pace of jobs change

Technology has always changed rapidly, and that’s certainly the case today. However, there’s often a lag between the time a new machine or process is invented and when it reverberates in the workplace.

“The workplace isn’t evolving as fast as we thought it would,” Paul Oyer, a Stanford GSB professor of economics and senior fellow at the Stanford Institute for Economic Policy Research, said during a panel discussion at the forum. “I thought the gig economy would take over, but it hasn’t. And I thought that by now people would find their ideal mates and jobs online, but that was wrong too.”

Consider the leap from steam power to electric power. When electricity first became available, some factories simply replaced the single large steam engine on the factory floor with a single electric motor. That did not significantly change the nature of factory work, said Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy. But when machinery throughout the factory was electrified, work changed radically.

The rise of the service sector

Employment in some sectors in which employees tend to have less education is still strong, particularly the service sector. As well-paid professionals settle in cities, they create a demand for services and new types of jobs. MIT’s Autor called these occupations “wealth work jobs,” which include employment for everything from baristas to horse exercisers.

The 10 most common occupations in the U.S. include such jobs as retail salespersons, office clerks, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer make the list.

Looming over all of the changes to the labor force is the stark fact that birth rates in the U.S. are at an all-time low, said Varian. As has been widely reported, the aging of the baby boom generation creates demand for service jobs but leaves fewer workers actively contributing labor to the economy.

Even so, the U.S. workforce is in much better shape than those of other industrialized countries. The so-called dependency ratio, the number of people over 65 for every 100 people of working age, will be much higher in Japan, Spain, South Korea, Germany, and Italy by 2050. And not coincidentally, said Varian, countries with high dependency ratios are looking the hardest at automating jobs.
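
As a quick, hypothetical illustration of the ratio described above (the population figures below are invented, not from the forum):

```python
def old_age_dependency_ratio(pop_65_plus: float, pop_15_to_64: float) -> float:
    """People aged 65 and over per 100 people of working age (15-64)."""
    return 100.0 * pop_65_plus / pop_15_to_64

# Example: 20 million people aged 65+ and 60 million of working age
# gives a dependency ratio of about 33.
print(f"{old_age_dependency_ratio(20e6, 60e6):.1f}")
```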

As the country ages, society will have to find new, more efficient ways to train and expand the workforce, the panelists said. It will also have to better accommodate the growing number of women in the workforce, many of whom are still held back by family and household responsibilities.

The robots may not be taking over just yet, but advances in artificial intelligence and machine learning will eventually become more of a challenge to the workforce. Still, it’s heartening to be reminded that, for now, “humans are underrated.”

Editor’s note: This piece was originally published by Stanford Graduate School of Business.

FDA warns about using surgical robots for cancer treatment

The U.S. Food and Drug Administration (FDA) has issued a warning against the use of surgical robots in mastectomies and other surgeries for the treatment or prevention of cancer.

The FDA said the safety and effectiveness of surgical robots have not been established for use in mastectomies or for surgeries to prevent or treat breast and other cancers. The FDA said it “encourages health care providers who use robotically-assisted surgical devices to have specialized training and practice in their use.”

The warning follows the publication of a study in the New England Journal of Medicine reporting that robot-assisted surgeries for early-stage cervical cancer were associated with lower survival rates than open abdominal radical hysterectomies. The journal article noted that other researchers have reported no statistically significant difference in long-term survival when these types of surgical procedures are compared.

Separately, a New Jersey hospital suspended robot-assisted mastectomy after it publicized two such procedures performed by one of its surgeons, concluding the surgery needs further review, according to a report in the Asbury Park Press. Long Island, N.Y.-based Northwell Health also touted the robot-assisted double mastectomy performed on a woman who carried the BRCA gene, which is associated with developing breast cancer.

Advocates of such robot-assisted surgery say it’s less invasive and causes less pain and fewer scars for patients. Critics such as Hooman Noorchashm, a Philadelphia-area surgeon-turned-patient-advocate, say that robot-assisted procedures are very different from standard surgery.

“If you’re dramatically changing existing standards of care… you better demonstrate that it’s not inferior to existing standards before you advertise it,” Noorchashm said in an interview with Medical Design & Outsourcing, a sister website of The Robot Report.

Demonstrating non-inferiority would traditionally require clinical trials comparing the outcomes of robotic-assisted versus standard surgery. Such studies on breast cancer patients are underway in France and Italy, but none have occurred in the United States. French investigators are comparing outcomes using traditional surgery and Intuitive Surgical’s da Vinci Robot Xi, according to the trial’s listing on clinicaltrials.gov.

“To date, the FDA’s evaluation of robotically assisted surgical devices has generally focused on determining whether the complication rate at 30 days is clinically comparable to other surgical techniques,” the agency’s warning says. “To evaluate robotically-assisted surgical devices for use in the prevention or treatment of cancer, including breast cancer, the FDA anticipates these uses would be supported by specific clinical outcomes, such as local cancer recurrence, disease-free survival, or overall survival at time periods much longer than 30 days.”

The agency has not granted marketing authorization for any robot-assisted surgical device for use in the United States for the prevention or treatment of cancer, including breast cancer. The labeling for robotic surgical devices that are legally marketed in the United States includes statements that cancer treatment outcomes using the device have not been evaluated by FDA.

“Health care providers and patients should consider the benefits, risks and alternatives to robotically assisted surgical procedures and consider this information to make informed treatment decisions,” the agency’s warning said.

Intuitive Surgical, the industry leader in robot-assisted surgery, promotes the use of its robots for hysterectomies, both benign and cancer-related, on its website. The company does not list mastectomy among da Vinci’s recommended procedures, and the FDA did not mention Intuitive or any other surgical robotics company in its warning.

“Minimally invasive surgical devices, including robotic-assisted surgical systems and laparoscopic surgical tools, are cleared by the FDA for specific procedures, such as prostatectomy and hysterectomy, not specifically for cancer prevention or treatment,” Intuitive Surgical said in an email. “To date, there are more than 15,000 peer-reviewed articles that, in aggregate, support the safety and effectiveness of robotic-assisted surgery. We value the FDA’s role in protecting and promoting public health, and will continue to look to the agency for guidance as we develop innovative solutions for surgeons and their patients.”

A Danish study published this week in JAMA Surgery found that the risk of severe complications among early-stage endometrial cancer patients who underwent surgery dropped significantly after the national introduction of minimally invasive robotic surgery. The patients with better outcomes included those who underwent minimally invasive laparoscopic surgery and those who underwent robot-assisted surgery. Patients were followed for 90 days after their procedures.

Intuitive Surgical recommended that surgeons discuss all treatment options with their patients and that patients ask surgeons about their training, experience, and patient outcomes.

Noorchashm’s wife, Dr. Amy Reed, died in 2017, four years after a myomectomy of fibroid tumors using a power morcellator. The tumors turned out to be a malignant form of cancer called uterine sarcoma that’s difficult to distinguish from benign tumors.

Noorchashm said he’s not anti-innovation but wants surgeons to proceed with caution. “They should slow the hell down,” he said.

Editor’s Note: This article was originally published on our sister website Medical Design & Outsourcing.

Build better robots by listening to customer backlash

In the wake of the closure of Apple’s autonomous car division (Project Titan) this week, one questions whether Steve Jobs’ axiom still holds true. “Some people say, ‘Give the customers what they want.’ But that’s not my approach. Our job is to figure out what they’re going to want before they do,” declared Jobs, who continued with an analogy: “I think Henry Ford once said, ‘If I’d asked customers what they wanted, they would have told me, “a faster horse!”’” Titan joins a growing graveyard of autonomous innovations, filled with the tombstones of Baxter, Jibo, Kuri, and many broken quadcopters. If anything holds true, it is that not every founder is a Steve Jobs or a Henry Ford, and listening to public backlash can be a bellwether for success.

Adam Jonas of Morgan Stanley announced on Jan. 9, 2019 from the Consumer Electronics Show (CES) floor, “It’s official. AVs are overhyped. Not that the safety, economic, and efficiency benefits of robotaxis aren’t valid and noble. They are. It’s the timing… the telemetry of adoption for L5 cars without safety drivers expected by many investors may be too aggressive by a decade… possibly decades.”

The timing sentiment is probably best echoed by the backlash from the inhabitants of Chandler, Arizona, who have protested vocally, and even resorted to violence, against Waymo’s self-driving trials on their streets. This rancor came to a head in August, when a 69-year-old local pointed his pistol at a robocar (and its human safety driver).

In a profile of the Arizona beta trial, The New York Times interviewed some of the loudest advocates against Waymo in the Phoenix suburb. Erik and Elizabeth O’Polka expressed frustration with their elected leaders for turning their neighbors and their children into guinea pigs for artificial intelligence.

Elizabeth adamantly declared, “They didn’t ask us if we wanted to be part of their beta test.” Her husband strongly agreed: “They said they need real-world examples, but I don’t want to be their real-world mistake.” The couple has been warned several times by the Chandler police to stop attempting to run Waymo cars off the road. Elizabeth confessed to the Times “that her husband ‘finds it entertaining to brake hard’ in front of the self-driving vans, and that she herself ‘may have forced them to pull over’ so she could yell at them to get out of their neighborhood.” The reporter revealed that the backlash tensions started to boil “when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.”

Rethink's Baxter robot was the subject of a user backlash because of design limitations.

The deliberate sabotaging by the O’Polkas could be indicative of the attitudes of millions of citizens who feel ignored by the speed of innovation. Deployments that run oblivious to this view, relying solely on the excitement of investors and insiders, ultimately face backlash when customers flock to competitors.

In the cobot world, the early battle between Rethink Robotics and Universal Robots (UR) is probably one of the clearest examples of tone-deaf invention by engineers. Rethink’s eventual demise was a classic case of form over function, with a lot of hype sprinkled on top.

Rodney Brooks' collaborative robotics enterprise raised close to $150 million over its decade-long existence. The startup rode the coattails of its co-founder, often referred to as the godfather of robotics, before ever delivering a product.

Dedicated Rethink distributor Dan O’Brien recalled, “I’ve never seen a product get so much publicity. I fell in love with Rethink in 2010.” The company’s first product, Baxter, was released in 2012 and promised to bring safety, productivity, and a little whimsy to the factory floor. The robot stood around six feet tall, with two bright red arms connected to an animated screen complete with friendly facial expressions.

At the same time, Rethink’s robots were not able to perform as advertised in industrial environments, leading to a backlash and slow adoption. The problem stemmed from Brooks’ insistence on licensing the company’s actuation technology, series elastic actuators (SEAs), from his former employer MIT rather than adopting the industry-standard Harmonic Drive actuators. Users demanded greater precision from their machines, and competitors such as UR, a Harmonic Drive customer, took the lead in delivering it.
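
For context on why the actuator choice mattered, here is a minimal, hypothetical sketch of the series-elastic idea (illustrative only, not Rethink’s implementation, and the stiffness and load values are made up): an SEA senses torque through the deflection of a spring placed between motor and joint, which buys compliance and safety at the cost of positioning accuracy when the arm is loaded.

```python
# Series elastic actuator sketch: torque is inferred from spring wind-up,
# tau = k * (theta_motor - theta_joint). All numbers are hypothetical.

def sea_torque(theta_motor_rad: float, theta_joint_rad: float, k_spring: float) -> float:
    """Estimated output torque from the measured spring deflection."""
    return k_spring * (theta_motor_rad - theta_joint_rad)

def static_deflection(load_torque: float, k_spring: float) -> float:
    """Spring wind-up a position controller must cancel to hold a pose under load."""
    return load_torque / k_spring

if __name__ == "__main__":
    k = 300.0    # spring stiffness, N*m/rad (hypothetical)
    load = 15.0  # external torque from a payload, N*m (hypothetical)
    dq = static_deflection(load, k)
    print(f"deflection: {dq:.3f} rad, sensed torque: {sea_torque(dq, 0.0, k):.1f} N*m")
```

A stiff gear train deflects far less under the same load, which is one reason integrators found arms like UR’s easier to hold to industrial accuracy requirements.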

Universal Robots’ cobots perform better than those of the late Rethink Robotics.

The backlash to Baxter is best illustrated by the comments of Steve Leach, president of Numatic Engineering, an automation integrator. In 2010, Leach hoped that Rethink could be “the iPhone of the industrial automation world.”

However, “Baxter wasn’t accurate or smooth,” said Leach, who was dismayed after seeing the final product. “After customers watched the demo, they lost interest because Baxter was not able to meet their needs.”

“We signed on early, a month before Baxter was released, and thought the software and mechanics would be refined. But they were not,” sighed Leach. In the six years since Baxter’s disappointing launch, Rethink did little to address the SEA problem. Most of the 1,000 Baxters sold by Rethink were delivered to academia, not to commercial industry.

By contrast, Universal has booked more than 27,000 robots since its founding in 2005. Even Leach, who spent a year passionately trying to sell a single Baxter unit, switched to UR and sold his first one within a week. Leach elaborated, “From the ground up, UR’s firmware and hardware were specifically developed for industrial applications and met the expectations of those customers. That’s really where Rethink missed the mark.”

This garbage can robot seen at CES was designed to be cheap and avoid consumer backlash.

As machines permeate human streets, factories, offices, and homes, building a symbiotic relationship between intended operators and creators is even more critical. Too often, I meet entrepreneurs who demonstrate concepts with little input from potential buyers. This past January, the aisles of CES were littered with such items, but the one above was designed with a potential backlash in mind.

Simplehuman, the product development firm known for its elegantly designed housewares, unveiled a $200 aluminum robot trash can. This is part of a new line of Simplehuman’s own voice-activated products, potentially competing with Amazon Alexa. In the words of its founder, Frank Yang, “Sometimes, it’s just about pre-empting the users’ needs, and including features we think they would appreciate. If they don’t, we can always go back to the drawing board and tweak the product again.”

To understand the innovation ecosystem in the age of hackers, join the next RobotLab series on “Cybersecurity & Machines” with John Frankel of ffVC and Guy Franklin of SOSA on February 12 in New York City. Seating is limited, so RSVP today!

Reinforcement learning, YouTube teaching robots new tricks

The sun may be setting on what David Letterman would call “Stupid Robot Tricks,” as intelligent machines are beginning to surpass humans in a wide variety of manual and intellectual pursuits. In March 2016, Google’s DeepMind software program AlphaGo defeated the reigning Go champion, Lee Sedol. Go, a Chinese game that originated more than 3,000…

How robotics & IoT help sustain triple bottom line

Sustainability in business has become a popular topic as companies increasingly recognize the need to have a firm grip on their triple bottom line – financial, social and environmental. Financial, social and environmental risks and opportunities abound in business, but creating an operations model that speaks directly to aspects of profits, people, and planet is…
