Understand.ai accelerates image annotation for self-driving cars

Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Individual image elements, such as a tree, a pedestrian, or a road sign, must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning models for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm that uses this data for training.
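
To make the pixel-accuracy requirement concrete, here is a minimal sketch of what the annotations for a single frame might look like. The field names and values are purely illustrative assumptions, not understand.ai’s actual schema.

```python
# Hypothetical annotation record for one video frame (illustrative only,
# not understand.ai's actual schema). Each labeled object carries a class
# name plus pixel coordinates; the polygon outline is what provides the
# pixel-level accuracy described above.
frame_annotation = {
    "frame_id": "drive_0042_frame_001337",
    "objects": [
        {
            "label": "pedestrian",
            "bbox": [412, 188, 461, 302],  # x_min, y_min, x_max, y_max in pixels
            "polygon": [[430, 188], [461, 250], [440, 302], [412, 255]],
        },
        {
            "label": "road_sign",
            "bbox": [903, 95, 940, 170],
            "polygon": [[903, 95], [940, 95], [940, 170], [903, 170]],
        },
    ],
    "reviewed_by_human": True,  # the final quality-control step noted above
}
```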

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.

Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving while developing an autonomous model car in the KITCar student group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn more in a shorter period of time than in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.

Robotic catheter brings autonomous navigation into the human body

Concentric tube robot. In a recent demo, a robotic catheter autonomously found its way to a leaky heart valve. Source: Pediatric Cardiac Bioengineering Lab, Department of Cardiovascular Surgery, Boston Children’s Hospital, Harvard Medical School

BOSTON — Bioengineers at Boston Children’s Hospital said they successfully demonstrated for the first time a robot able to navigate autonomously inside the body. In a live pig, the team programmed a robotic catheter to find its way along the walls of a beating, blood-filled heart to a leaky valve — without a surgeon’s guidance. They reported their work today in Science Robotics.

Surgeons have used robots operated by joysticks for more than a decade, and teams have shown that tiny robots can be steered through the body by external forces such as magnetism. However, senior investigator Pierre Dupont, Ph.D., chief of Pediatric Cardiac Bioengineering at Boston Children’s, said that to his knowledge, this is the first report of the equivalent of a self-driving car navigating to a desired destination inside the body.

Pierre Dupont, chief of Pediatric Cardiac Bioengineering at Boston Children’s Hospital

Dupont said he envisions autonomous robots assisting surgeons in complex operations, reducing fatigue and freeing surgeons to focus on the most difficult maneuvers, improving outcomes.

“The right way to think about this is through the analogy of a fighter pilot and a fighter plane,” he said. “The fighter plane takes on the routine tasks like flying the plane, so the pilot can focus on the higher-level tasks of the mission.”

Touch-guided vision, informed by AI

The team’s robotic catheter navigated using an optical touch sensor developed in Dupont’s lab, informed by a map of the cardiac anatomy and preoperative scans. The touch sensor uses artificial intelligence and image processing algorithms to enable the catheter to figure out where it is in the heart and where it needs to go.

For the demo, the team performed a highly technically demanding procedure known as paravalvular aortic leak closure, which repairs replacement heart valves that have begun leaking around the edges. (The team constructed its own valves for the experiments.) Once the robotic catheter reached the leak location, an experienced cardiac surgeon took control and inserted a plug to close the leak.

In repeated trials, the robotic catheter successfully navigated to heart valve leaks in roughly the same amount of time as the surgeon (using either a hand tool or a joystick-controlled robot).

Biologically inspired navigation

Through a navigational technique called “wall following,” the robotic catheter’s optical touch sensor sampled its environment at regular intervals, in much the way insects’ antennae or the whiskers of rodents sample their surroundings to build mental maps of unfamiliar, dark environments. The sensor told the catheter whether it was touching blood, the heart wall or a valve (through images from a tip-mounted camera) and how hard it was pressing (to keep it from damaging the beating heart).

Data from preoperative imaging and machine learning algorithms helped the catheter interpret visual features. In this way, the robotic catheter advanced by itself from the base of the heart, along the wall of the left ventricle and around the leaky valve until it reached the location of the leak.

“The algorithms help the catheter figure out what type of tissue it’s touching, where it is in the heart, and how it should choose its next motion to get where we want it to go,” Dupont explained.
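
For illustration, here is a minimal sketch of one decision step in such a wall-following loop. The contact classes, force threshold, and motion primitives are hypothetical stand-ins; the team’s actual control code is not described in detail here.

```python
# Sketch of a single wall-following decision step (hypothetical stand-ins,
# not the Boston Children's team's code). The tip sensor reports what the
# catheter is touching and how hard it is pressing; the controller picks
# the next motion primitive accordingly.
from enum import Enum

class Contact(Enum):
    BLOOD = 0   # free space, no tissue contact
    WALL = 1    # touching the heart wall
    VALVE = 2   # touching valve tissue

MAX_FORCE = 0.1  # arbitrary units; cap contact force to protect the tissue

def next_motion(contact: Contact, force: float, at_target: bool) -> str:
    """Choose the next motion primitive from one sensor sample."""
    if at_target:
        return "stop"                 # hand control over to the surgeon
    if force > MAX_FORCE:
        return "retract"              # pressing too hard on the beating heart
    if contact is Contact.BLOOD:
        return "advance_toward_wall"  # lost the wall; regain contact
    if contact is Contact.VALVE:
        return "follow_valve_rim"     # circle the valve to locate the leak
    return "slide_along_wall"         # default wall-following behavior
```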

Though the autonomous robot took a bit longer than the surgeon to reach the leaky valve, its wall-following technique meant that it took the longest path.

“The navigation time was statistically equivalent for all, which we think is pretty impressive given that you’re inside the blood-filled beating heart and trying to reach a millimeter-scale target on a specific valve,” said Dupont.

He added that the robot’s ability to visualize and sense its environment could eliminate the need for fluoroscopic imaging, which is typically used in this operation and exposes patients to ionizing radiation.

Robotic catheter enters internal jugular vein and navigates through the vasculature into the right atrium. Source: Pediatric Cardiac Bioengineering Lab

A vision of the future?

Dupont said the project was the most challenging of his career. While the cardiac surgical fellow, who performed the operations on swine, was able to relax while the robot found the valve leaks, the project was taxing for Dupont’s engineering fellows, who sometimes had to reprogram the robot mid-operation as they perfected the technology.

“I remember times when the engineers on our team walked out of the OR completely exhausted, but we managed to pull it off,” said Dupont. “Now that we’ve demonstrated autonomous navigation, much more is possible.”

Some cardiac interventionalists who are aware of Dupont’s work envision using robots for more than navigation, performing routine heart-mapping tasks, for example. Some envision this technology providing guidance during particularly difficult or unusual cases or assisting in operations in parts of the world that lack highly experienced surgeons.

As the U.S. Food and Drug Administration begins to develop a regulatory framework for AI-enabled devices, Dupont said that autonomous surgical robots all over the world could pool their data to continuously improve performance over time — much like self-driving vehicles in the field send their data back to Tesla to refine its algorithms.

“This would not only level the playing field, it would raise it,” said Dupont. “Every clinician in the world would be operating at a level of skill and experience equivalent to the best in their field. This has always been the promise of medical robots. Autonomy may be what gets us there.”

Boston Children’s Hospital in the Longwood Medical Area. Photo by Jenna Lang.

About the paper

Georgios Fagogenis, PhD, of Boston Children’s Hospital was first author on the paper. Coauthors were Margherita Mencattelli, PhD, Zurab Machaidze, MD, Karl Price, MASc, Viktoria Weixler, MD, Mossab Saeed, MB, BS, and John Mayer, MD, of Boston Children’s Hospital; Benoit Rosa, PhD, of ICube, Université de Strasbourg (Strasbourg, France); and Fei-Yi Wu, MD, of Taipei Veterans General Hospital, Taipei, Taiwan. For more on the technology, contact TIDO@childrenshospital.org.

The study was funded by the National Institutes of Health (R01HL124020), with partial support from the ANR/Investissement d’avenir program. Dupont and several of his coauthors are inventors on a U.S. patent application held by Boston Children’s Hospital that covers the optical imaging technique.

About Boston Children’s Hospital

Boston Children’s Hospital, the primary pediatric teaching affiliate of Harvard Medical School, said it is home to the world’s largest research enterprise based at a pediatric medical center. Its discoveries have benefited both children and adults since 1869. Today, its research community includes more than 3,000 scientists, among them 8 members of the National Academy of Sciences, 18 members of the National Academy of Medicine, and 12 Howard Hughes Medical Institute investigators.

Founded as a 20-bed hospital for children, Boston Children’s is now a 415-bed comprehensive center for pediatric and adolescent health care. For more, visit the Vector and Thriving blogs and follow the hospital on social media at @BostonChildrens and @BCH_Innovation, and on Facebook and YouTube.

Programmable duAro robot enables automation at companies of all sizes

It’s a common misconception that integrating robots means spending a lot to completely overhaul production lines and start from scratch. In 2016, Kawasaki introduced the highly innovative human-friendly industrial SCARA robot named duAro whose mobile design and safety functionality make it suitable for companies of any size. Integrating the duAro into a manufacturing process is a relatively simple change that can benefit the bottom line and relieve employees from performing menial tasks.

The duAro is the first dual-armed horizontal articulated robot to operate on a single axis. This configuration enables the robot to perform coordinated movements, much like a human, making it suitable for applications such as small-part inspection, assembly, material handling, material removal, and machine tending. As the robot is designed to fit into a single-person space, it can easily be deployed without modifications to any assembly or manufacturing line. The mobile base on which the dual arms are mounted also accommodates the controller, allowing the user to move the unit to any desired location.

The duAro’s design also reflects the need to keep its human co-workers safe. Low-power motors, a soft body, speed and work-zone monitoring, and a deceleration function allow the duAro to collaborate safely with humans in work operations. In the unlikely event of a collision, the collision detection function instantaneously stops the robot’s movement. The duAro isn’t only safe; it’s also smart. The direct teach function allows the user to teach the robot tasks by hand-guiding its arms. In addition, the robot can be programmed through a tablet terminal by entering numerical values indicating the direction and distance of each movement. This user-friendly robot, with its small installation footprint and mobile base, is also suitable for high-mix, low-volume production.
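
The tablet workflow, entering a direction and distance for each movement, amounts to a short list of incremental move commands. The sketch below is a hypothetical illustration of that idea, not Kawasaki’s actual programming interface.

```python
# Hypothetical incremental move program of the kind entered on the tablet
# (direction plus distance per step). Illustrative only; this is not
# Kawasaki's actual API.
pick_and_place = [
    ("x", +120.0),   # mm toward the parts tray
    ("z", -40.0),    # lower to grasp height
    ("grip", 1),     # close the gripper
    ("z", +40.0),    # lift the part
    ("x", -120.0),   # return toward the machine
]

def run(moves):
    """Print the command stream a controller would receive."""
    for axis, value in moves:
        if axis == "grip":
            print("gripper:", "close" if value else "open")
        else:
            print(f"move {axis} by {value:+.1f} mm")

run(pick_and_place)
```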

Two Kawasaki dual-arm duAro robots were installed at a Tier 1 auto parts supplier to work together in a machine-tending application. With the implementation of these two robots, the supplier was able to double its throughput and eliminate errors. This turnkey solution took about nine weeks to implement, from initial design to commissioning, and an additional week was used to train employees on how to operate the system. Design, build, and commissioning can range anywhere from a week for a single unit to two to three months for a turnkey system. With a base price of $33,000, the duAro is a safe, affordable, easy-to-operate collaborative robot that can meet the demands of flexible manufacturing.

Visit Kawasaki Robotics (USA) Inc. next week at Automate 2019 (booth 7340) or KawasakiRobotics.com.

OnRobot grippers support more cobots with new I/O converter

OnRobot Digital I/O Converter

OnRobot has launched a Digital I/O Converter that enables its RG2, RG6, Gecko, and VG10 grippers to integrate seamlessly with a wider range of collaborative robot arms. The converter lets cobot arms work with the OnRobot grippers with minimal programming, resulting in faster switchover between multiple tasks.

OnRobot said this leads to an increase in production, as the cobots can get back to work faster. The company said the Digital I/O Converter works with cobot arms from a range of brands.

Of course, OnRobot also supports Universal Robots, the world’s leading collaborative robotics company. OnRobot’s RG2, RG6, and VG10 grippers, as well as its HEX force/torque sensing package, are part of the UR+ program.

As different cobot arms understand I/O signals differently, OnRobot said the I/O Converter can convert NPN to PNP signals and vice versa. What does that mean for the robot operator? NPN and PNP refer to the type of transistor used to switch a sensor’s output: PNP sensors are sometimes known as “sourcing sensors” because they source positive power to the output, while NPN sensors are often called “sinking sensors” because they sink the output to ground.
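
In code terms, converting between the two amounts to inverting the logic level. Here is a simplified sketch under that assumption (active-low NPN, active-high PNP); it is not OnRobot’s firmware.

```python
# Simplified NPN/PNP conversion sketch (illustrative, not OnRobot's
# firmware). An NPN (sinking) output pulls the line to ground when active,
# so it reads as active-low; a PNP (sourcing) output drives the line high,
# so it reads as active-high. Converting between them is a logic inversion.
def npn_to_pnp(line_is_high: bool) -> bool:
    """Read an active-low NPN input, drive an active-high PNP output."""
    signal_active = not line_is_high
    return signal_active              # True = drive the output line high

def pnp_to_npn(line_is_high: bool) -> bool:
    """Read an active-high PNP input, drive an active-low NPN output."""
    signal_active = line_is_high
    return not signal_active          # False = pull the output line low
```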

With the I/O converter, programmers don’t have to worry about the robots not understanding the signals received. The Digital I/O Converter also includes an adapter plate for converting the UR-type A flanges mechanically to other robot flanges.

The Digital I/O Converter can be ordered through local sales offices. Additional information, datasheets, and manuals detailing mounting, cable routing, software configuration, and electrical connections can be downloaded from OnRobot’s website.

Established in 2015, OnRobot merged with Perception Robotics and OptoForce in 2018, followed by the acquisition of Purple Robotics. Purple Robotics was launched in 2017 by three former Universal Robots (UR) employees with 18-plus years of experience working on the UR3, UR5, and UR10 cobots. Co-founders Lasse Kieffer, Henrik Tillitz Hansen, and Peter Nadolny Madsen, who describe themselves as “three Danish super-nerds,” found 40 partners in 25 countries just three months after launching the PR10.

OnRobot kicked off 2019 by shipping pre-orders of its Gecko Gripper that uses millions of micro-scaled fibrillar stalks that adhere to a surface using powerful van der Waals forces — the same way that geckos climb.

Festo’s Bionic robots merge pneumatics, artificial intelligence

Bionic SoftHand from Festo plays Rock-Paper-Scissors. Credit: Philipp Freudigmann

Whether it’s grabbing, holding or turning, touching, typing or pressing — in everyday life, we use our hands as a matter of course for the most diverse tasks. In that regard, the human hand, with its unique combination of power, dexterity, and fine motor skills, is a true miracle tool of nature. What could be more natural than equipping robots in collaborative workspaces with a gripper that is modeled after this example from nature and solves various tasks by learning with artificial intelligence? Festo’s Bionic series does just that.

Festo announced that it will show its BionicSoftHand pneumatic robot hand at Hannover Messe 2019. Combined with the BionicSoftArm, a pneumatic lightweight robot, these future concepts are suitable for human-robot collaboration.

The BionicSoftHand is pneumatically operated so that it can interact safely and directly with people. Unlike the human hand, the BionicSoftHand has no bones. Its fingers consist of flexible bellows structures with air chambers.

The bellows are enclosed in the fingers by a special 3D textile coat knitted from both elastic and high-strength threads. Thanks to this soft robotics material, it is possible to determine exactly where the structure expands and generates power and where it is prevented from expanding. This makes the hand light, flexible, adaptable, and sensitive, yet capable of exerting strong forces.

AI-guided Bionic grasping

The methods by which machines learn are comparable to those of humans: they require positive or negative feedback on their actions in order to classify those actions and learn from them. The BionicSoftHand uses this method of reinforcement learning.

This means that instead of imitating a specific action, the hand is merely given a goal. It uses trial and error to achieve that goal. Based on the feedback it receives, the Bionic gripper gradually optimizes its actions until the task is finally solved.

Specifically, the BionicSoftHand can rotate a 12-sided cube so that a previously defined side ends up on top. The necessary movement strategy is taught in a virtual environment with the aid of a digital twin, which is created with the help of data from a depth-sensing camera and computer vision algorithms.
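
As a toy illustration of this goal-driven, trial-and-error scheme, the sketch below runs generic tabular Q-learning on an abstracted version of the cube task. The states, actions, and rewards are simplified stand-ins, and this is not Festo’s actual training setup.

```python
# Toy tabular Q-learning sketch in the spirit of the reinforcement
# learning described above (not Festo's setup). State: which of the 12
# faces is on top. Actions: three hand rotations, modeled as fixed
# permutations of the faces. Reward arrives only when the goal face is up.
import random

N_FACES, GOAL = 12, 7
ACTIONS = [lambda s: (s + 1) % N_FACES,   # roll one way
           lambda s: (s - 1) % N_FACES,   # roll back
           lambda s: (s + 3) % N_FACES]   # a larger reorientation
Q = [[0.0] * len(ACTIONS) for _ in range(N_FACES)]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2         # learning rate, discount, exploration

for _ in range(5000):                     # training episodes
    s = random.randrange(N_FACES)
    for _ in range(20):                   # steps per episode
        if random.random() < EPS:         # explore a random rotation
            a = random.randrange(len(ACTIONS))
        else:                             # exploit the best known rotation
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s2 = ACTIONS[a](s)
        reward = 1.0 if s2 == GOAL else 0.0
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if reward:                        # task solved, end the episode
            break
```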

Proportional piezo valves for precise control

To minimize the effects of tubing, Festo’s developers have specially designed a small, digitally controlled valve terminal, which is mounted directly on the BionicSoftHand. This means that the tubes for controlling the gripper fingers do not have to be pulled through the entire robot arm.

Thus, the BionicSoftHand can be quickly and easily connected and operated with only one tube each for supply air and exhaust air. With the proportional piezo valves used, the movements of the fingers can be precisely controlled.

The days of strict separation between factory workers and automation are passing, thanks to collaborative robots. As their workspaces converge, humans and machines will be able to work simultaneously on the same workpiece or component — without having to be shielded from each other for safety reasons.

The BionicSoftArm is a compact further development of Festo’s BionicMotionRobot, whose range of applications has been significantly expanded. Thanks to its modular design, the Bionic arm can be combined with up to seven pneumatic bellows segments and rotary drives. This guarantees maximum flexibility in terms of reach and mobility. The arm can work around obstacles even in the tightest of spaces if necessary.

At the same time, it is completely flexible and can work safely with people. Direct human-robot collaboration is possible with the BionicSoftArm, as well as its use in classic SCARA applications, such as pick-and-place tasks.

Flexible application possibilities

The modular robot arm can be used for a wide variety of applications, depending on the design and mounted gripper. Thanks to its flexible kinematics, the BionicSoftArm can interact directly and safely with humans.

At the same time, the kinematics make it easier for the Bionic arm to adapt to different tasks at various locations in production environments. Eliminating costly safety devices such as cages and light barriers shortens changeover times and thus enables flexible use, fully in line with adaptive and economical production.

BionicFinWave: Underwater robot with unique fin drive

Nature impressively demonstrates how optimal drive systems for certain swimming movements should look. To move forward, marine flatworms and cuttlefish create a continuous wave with their fins that advances along their entire length.

For the BionicFinWave, the bionics team was inspired by this undulating fin movement. The undulation pushes the water backwards, creating a forward thrust. This principle allows the BionicFinWave to maneuver forwards or backwards through an acrylic tube system.

The BionicFinWave’s two side fins are cast entirely from silicone and do not require struts or other supporting elements. The two fins are attached to the left and right of nine small lever arms, which in turn are powered by two servo motors. Two adjacent crankshafts transmit the force to the levers so that the two fins can be moved individually to generate different wave patterns. The fins are particularly suitable for slow, precise locomotion, and they whirl up less water than, for example, a screw drive.
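
Because each lever is phase-shifted relative to its neighbor, the fin’s deflection forms a traveling sine wave. Here is a sketch of how the nine lever commands could be generated; all parameter values are made-up assumptions, not Festo’s figures.

```python
# Traveling-wave sketch for the nine lever arms (parameter values are
# made up for illustration). Phase-shifting each lever makes the fin's
# deflection travel along its length, pushing water backwards.
import math

N_LEVERS = 9           # lever arms per fin, as described above
AMPLITUDE_DEG = 25.0   # peak lever deflection (assumed)
WAVELENGTH = 6.0       # levers spanned by one full wave (assumed)
FREQUENCY_HZ = 1.5     # wave cycles per second (assumed)

def lever_angles(t: float) -> list[float]:
    """Lever deflections in degrees at time t (seconds)."""
    return [
        AMPLITUDE_DEG * math.sin(2 * math.pi * (FREQUENCY_HZ * t - i / WAVELENGTH))
        for i in range(N_LEVERS)
    ]

# Negating FREQUENCY_HZ reverses the wave's travel direction, which makes
# the robot swim backwards instead of forwards.
```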

A cardan joint sits between each lever segment to keep the Bionic robot’s crankshafts flexible. For this purpose, the crankshafts, including the joints and the connecting rods, are 3D-printed from plastic in one piece.

Intelligent interaction of a wide variety of components

The remaining elements in the BionicFinWave’s body are also 3D-printed, which is what makes its complex geometries possible in the first place. With their cavities, they act as flotation units.

At the same time, the entire control and regulation technology is watertight, safely installed, and synchronized within a very tight space. With the BionicFinWave, the Festo Bionic Learning Network continues its innovative approach to robotics.

Chinese mobile robot maker Geek+ coming to America

Chinese artificial intelligence and robot maker Geek+ (Beijing Geekplus Technology) will be demonstrating its line of picking, moving, and sortation robots April 9-12 at MODEX in Atlanta, Ga., the largest supply-chain trade show in the Americas. Geek+ is a leading provider — and China’s #1 supplier — of warehousing and logistics solutions and…
