6 common mistakes when setting up safety laser scanners


Having worked in industrial automation for most of my career, I’d like to think that I’ve built up a wealth of experience in the field of industrial safety sensors. Familiar with safety laser scanners for over a decade, I have been involved in many designs and installations.

I currently work for SICK (UK) Ltd., which invented the safety laser scanner, and I see people making the same mistakes time and time again. This short piece highlights, in my opinion, the most common of them.

1. Installation and mounting: Thinking about safety last

If you are going to remember just one point, then this is it. Too many times have I been present at an “almost finished” machine and asked, “Right, where can I stick this scanner?”

Inevitably, what ends up happening is that blind spots (shadows created by obstacles) become apparent all over the place. This requires mechanical “bodges” and maybe even additional scanners to cover the complete area when one scanner may have been sufficient if the cell was designed properly in the first place.

In safety, designing something out is by far the most cost-effective and robust solution. If you know you are going to be using a safety laser scanner, then design it in from the beginning — it could save you a world of pain. Consider blind zones, coverage and the location of hazards.

This also goes for automated guided vehicles (AGVs). For example, the most appropriate way to cover an AGV completely is to integrate two scanners into the vehicle, mounted on its corners (see Figure 1).

Figure 1: Typical AGV scanner mounting and integration. | Credit: SICK

2. Incorrect multiple sampling values configured

An often misunderstood concept, multiple sampling indicates how many times an object has to be detected in successive scans before a safety laser scanner reacts. Out of the box, this value is usually two scans, which is the minimum, although the default varies from manufacturer to manufacturer. A higher multiple sampling value reduces the possibility that insects, weld sparks, weather (for outdoor scanners) or other particles cause the machine to shut down.

Increasing the multiple sampling can improve a machine’s availability, but it can also have negative effects on the application. Increasing the number of samples essentially adds an OFF-delay to the system, meaning that your protective field may need to be bigger to account for the increase in total response time.

If a scanner has a robust detection algorithm, you shouldn’t have to increase this value very much. When the value is changed, however, you could be creating a hazard by reducing the effectiveness of the protective device.

If the value is changed, you should make a note of the safety laser scanner’s new response time and adjust the minimum distance from the hazardous point accordingly to ensure it remains safe.
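As a rough illustration of how that adjustment works, the sketch below applies the EN ISO 13855 formula for a horizontal protective field, S = (K × T) + C, where the total response time T grows with every extra scan added by multiple sampling. The scan cycle time, base response time and machine stopping time used here are placeholder values for illustration, not figures for any particular scanner.

```python
# Hedged sketch: effect of multiple sampling on the EN ISO 13855 minimum distance
# for a horizontal protective field. All numeric values are illustrative
# placeholders; always use the figures from your scanner's operating
# instructions and your own risk assessment.

K_MM_PER_S = 1600          # approach speed for walking persons (EN ISO 13855)
SCAN_CYCLE_S = 0.030       # assumed scan cycle time of the scanner (placeholder)
BASE_RESPONSE_S = 0.060    # assumed response time at the minimum 2x sampling (placeholder)
MACHINE_STOP_S = 0.200     # assumed machine stopping time (placeholder)
C_MM = 1200 - 0.4 * 300    # reach allowance for a scan plane mounted at H = 300 mm

def minimum_distance(multiple_sampling: int) -> float:
    """Return the minimum distance S in mm for a given multiple sampling setting."""
    # Each sample above the minimum of 2 adds one scan cycle to the response time.
    extra_scans = max(0, multiple_sampling - 2)
    scanner_response = BASE_RESPONSE_S + extra_scans * SCAN_CYCLE_S
    total_response = scanner_response + MACHINE_STOP_S
    return K_MM_PER_S * total_response + C_MM

for n in (2, 4, 8, 16):
    print(f"multiple sampling x{n}: S = {minimum_distance(n):.0f} mm")
```

Even this simplified calculation shows how quickly the required field depth grows as samples are added.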

Furthermore, in vertical applications, if the multiple sampling is set too high, then it may be possible for a person to pass through the protective field without being detected, so care must be taken. For one of our latest safety laser scanners, the microScan3, we provide the following advice:

Figure 2: Recommended multiple sampling values. | Credit: SICK

3. Incorrect selection of safety laser scanner

The maximum protective field that a scanner can facilitate is an important feature, but this value alone should not be the deciding factor in whether the scanner is suitable for an application. A safety laser scanner is a Type 3 device according to IEC 61496 and an Active Opto-electronic Protective Device responsive to Diffuse Reflection (AOPDDR), meaning it depends on diffuse reflections off objects. Therefore, to achieve longer ranges, scanners must be more sensitive. In practice, this means that sometimes scanning angle, and certainly detection robustness, can be sacrificed.

This could lead to a requirement for a higher number of multiple samples and perhaps a reduced angular resolution. The increased response time and reduced angular coverage could mean that larger protective fields, or even additional scanners, are required, even though you bought the longer-range device. A protective field should be as large as required but as small as possible.

A shorter-range scanner may be more robust than its longer-range big brother and, hence, keep the response time down, reduce the footprint, reduce cost and eliminate annoying false trips.

4. Incorrect resolution selected

The harmonized standard EN ISO 13855 can be used for positioning safeguards with respect to the approach speeds of the human body. Persons, or the parts of the body to be protected, may not be detected, or not detected in time, if the positioning or configuration is incorrect. The safety laser scanner should be mounted so that crawling beneath, climbing over and standing behind the protective fields is not possible.

If crawling under could create a hazardous situation, then the safety laser scanner should not be mounted any higher than 300 mm. At this height, a resolution of up to 70 mm can be selected to ensure that a human leg can be detected. However, it is sometimes not possible to mount the safety laser scanner at this height. If mounted below 300 mm, then a resolution of 50 mm should be used.

It is a very common mistake to mount the scanner lower than 300 mm and leave the resolution at 70 mm. Selecting a finer resolution may also reduce the maximum protective field possible on a safety laser scanner, so it is important to check.
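A simple plausibility check of the kind sketched below can catch this mistake during a configuration review. It merely encodes the rule of thumb from this section (70 mm leg detection at a 300 mm scan plane, 50 mm when mounted lower); the function name and thresholds are mine, invented for illustration, and are no substitute for the scanner manual or a risk assessment.

```python
def required_resolution_mm(mount_height_mm: float) -> int:
    """Required detection resolution per the rule of thumb in this section (illustrative)."""
    if mount_height_mm > 300:
        # Above 300 mm the protective field may be crawled beneath; redesign instead.
        raise ValueError("Scan plane higher than 300 mm: crawling beneath may be possible")
    # At 300 mm a 70 mm (leg detection) resolution suffices; lower scan planes need 50 mm.
    return 70 if mount_height_mm >= 300 else 50

configured_resolution_mm = 70
if configured_resolution_mm > required_resolution_mm(250):
    print("Configured resolution is too coarse for this mounting height")
```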

5. Ambient/environmental conditions were not considered

Sometimes safety laser scanners just aren’t suitable in an application. Coming from someone who sells and supports these devices, that is a difficult thing to say. However, scanners are electro-sensitive protective equipment and infrared light can be a tricky thing to work with. Scanners have become very robust devices over the last decade with increasingly complex detection techniques (SafeHDDM by SICK) and there are even safety laser scanners certified to work outdoors (outdoorScan3 by SICK).

However, there is a big difference between safety and availability, and expectations need to be realistic right from the beginning. A scanner might not maintain 100% machine availability if there is heavy dust, thick steam, excessive wood chippings, or even dandelions constantly in front of the field of view. Even though the scanner will continue to be safe and react to such situations, trips due to ambient conditions may not be acceptable to a user.

For extreme environments, the following question should be asked: “What happens when the scanner is not available due to extreme conditions?” This is especially true of outdoor applications in heavy rain, snow or fog. A full assessment of the ambient conditions, and potentially even proof tests, should be carried out. This particular issue can be very difficult, expensive and sometimes impossible to fix.

6. Non-safe switching of field sets

A field set in a safety laser scanner can consist of multiple different field types. For example, a field set could consist of four safe protective fields (Field Set 1), or of one safe protective field, two non-safe warning fields and a safe detection field (Field Set 2). See Figure 3.

Figure 3: Safety laser scanner field sets. | Credit: SICK

A scanner can store many different fields that can be selected using either hardwired inputs or safe networked inputs (CIP Safety, PROFIsafe, EFI Pro). This is a feature that industry finds very useful for both safety and productivity in Industry 4.0 applications.

However, the safety function (as per EN ISO 13849/EN 62061) for selecting the field set at any particular point in time should normally have the same safety robustness (PL/SIL) as the scanner itself. A safety laser scanner can be used in safety functions up to PLd/SIL2.

If we look at AGVs, for example, two rotary encoders are usually used to switch between fields, achieving field switching up to PLe/SIL3. There are now also safety-rated rotary encoders that can be used alone to achieve field switching up to PLd/SIL2.

However, sometimes the safety of this selection is overlooked. For example, if a standard PLC or a single-channel limit switch is used for selecting a field set, then this would reduce the PL/SIL of the whole system to possibly PLc or even PLa. An incorrect selection of field set could mean that an AGV is operating with a small protective field in combination with a high speed, and hence a long stopping time, creating a hazardous situation.
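To make the idea concrete, here is a deliberately simplified sketch of speed-based field switching with two encoder channels cross-checked against each other, in the spirit of the two-encoder approach described above. It is a conceptual illustration only: real field switching must be implemented in a suitable safety controller and validated to the required PL/SIL, and the names, speed bands and discrepancy limit below are invented for the example.

```python
# Conceptual sketch of speed-based field set selection with a two-channel
# encoder cross-check. Illustration only; not safety-rated code.

DISCREPANCY_LIMIT_MM_S = 50      # max allowed difference between channels (placeholder)

# Field sets ordered from slowest to fastest AGV speed (placeholder speed bands).
FIELD_SETS = [
    (300,  "FIELD_SET_SMALL"),   # up to 300 mm/s
    (1000, "FIELD_SET_MEDIUM"),  # up to 1000 mm/s
    (2000, "FIELD_SET_LARGE"),   # up to 2000 mm/s
]

def select_field_set(speed_a_mm_s: float, speed_b_mm_s: float) -> str:
    """Pick a field set from two independent speed measurements."""
    if abs(speed_a_mm_s - speed_b_mm_s) > DISCREPANCY_LIMIT_MM_S:
        # Channels disagree: assume the worst case and demand a safe stop.
        return "SAFE_STOP"
    speed = max(speed_a_mm_s, speed_b_mm_s)   # use the less favourable reading
    for limit, field_set in FIELD_SETS:
        if speed <= limit:
            return field_set
    return "SAFE_STOP"                        # faster than any validated band

print(select_field_set(280, 290))    # -> FIELD_SET_SMALL
print(select_field_set(900, 1400))   # -> SAFE_STOP (channel discrepancy)
```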

Summary

Scanners are complex devices that have been around for a long time, and there is plenty of choice in the market with regard to range, connectivity, size and robustness. There are also a lot of variables to consider when designing a safety solution using scanners. If you are new to this technology, it is a good idea to contact the manufacturer for advice on the application of these devices.

Here at SICK, we offer complimentary services to our customers, such as consultancy, on-site engineering assistance, risk assessment, safety concepts and safety verification of electro-sensitive protective equipment (ESPE). We are always happy to answer any questions, so if you’d like to get in touch, please do not hesitate to contact us.

About the Author

Dr. Martin Kidman is a Functional Safety Engineer and Product Specialist, Machinery Safety, at SICK (UK) Ltd. He received his Ph.D. from the University of Liverpool in 2010 and has been involved in industrial automation since 2006, working for various sensor manufacturers.

Kidman has been at SICK since January 2013 as a product specialist for machinery safety, providing services, support and consultancy for industrial safety applications. He is a certified FS Engineer (TUV Rheinland, #13017/16) and regularly delivers seminars and training courses covering functional safety topics. In the past, Kidman also worked for a notified body, testing to the Low Voltage Directive.


Vegebot robot applies machine learning to harvest lettuce

Vegebot, a vegetable-picking robot, uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop.

A team at the University of Cambridge initially trained Vegebot to recognize and harvest iceberg lettuce in the laboratory. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The researchers published their results in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the U.K., iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.

The Vegebot first identifies the “target” crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested. Finally, it cuts the lettuce from the rest of the plant without crushing it so that it is “supermarket ready.”

“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

Vegebot designed for lettuce-picking challenge

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image. Then for each lettuce, the robot classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
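The published system details are in the Journal of Field Robotics paper; as a rough sketch of the detect-then-classify structure described above, the outline below uses placeholder callables (detect_lettuces, classify_health, cut_and_grip) that stand in for whatever networks and actuation the team actually used. It is not the Vegebot code.

```python
# Rough outline of a detect-then-classify harvesting loop, as described above.
# detect_lettuces(), classify_health() and cut_and_grip() are placeholders for
# trained models and robot actions, not the actual Vegebot implementation.

def harvest_pass(overhead_image, detect_lettuces, classify_health, cut_and_grip):
    """Identify every lettuce in the frame, then harvest only the healthy, mature ones."""
    for box in detect_lettuces(overhead_image):           # stage 1: find all lettuces
        label = classify_health(overhead_image, box)       # stage 2: harvest decision
        if label == "ready":
            cut_and_grip(box)                               # stage 3: cut without crushing
        # immature or diseased heads are left in the field
```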

Vegebot uses machine vision to identify heads of iceberg lettuce in the field. | Credit: University of Cambridge

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognize healthy lettuce in the lab, the team then trained it in the field, in a variety of weather conditions, on thousands of real lettuce heads.

A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In the future, robotic harvesters could help address problems with labor shortages in agriculture. They could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded.

However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6 million ($8.26 million U.S.) for the new CDT, which will support at least 50 Ph.D. students.


Researchers back Tesla’s non-LiDAR approach to self-driving cars


 

If you haven’t heard, Tesla CEO Elon Musk is not a LiDAR fan. Most companies working on autonomous vehicles – including Ford, GM Cruise, Uber and Waymo – think LiDAR is an essential part of the sensor suite. But not Tesla. Its vehicles don’t have LiDAR and rely on radar, GPS, maps, cameras and other sensors.

“LiDAR is a fool’s errand,” Musk said at Tesla’s recent Autonomy Day. “Anyone relying on LiDAR is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

“LiDAR is lame,” Musk added. “They’re gonna dump LiDAR, mark my words. That’s my prediction.”

While not as anti-LiDAR as Musk, it appears researchers at Cornell University agree with his LiDAR-less approach. Using two inexpensive cameras on either side of a vehicle’s windshield, Cornell researchers have discovered they can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost.

The researchers found that analyzing the captured images from a bird’s-eye view, rather than the more traditional frontal view, more than tripled their accuracy, making stereo cameras a viable, low-cost alternative to LiDAR.

Tesla’s Sr. Director of AI Andrej Karpathy outlined a nearly identical strategy during Autonomy Day.

“The common belief is that you couldn’t make self-driving cars without LiDARs,” said Kilian Weinberger, associate professor of computer science at Cornell and senior author of the paper Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. “We’ve shown, at least in principle, that it’s possible.”

LiDAR uses lasers to create 3D point maps of its surroundings, measuring objects’ distance via the speed of light. Stereo cameras rely on two perspectives to establish depth. But critics say their accuracy in object detection is too low. However, the Cornell researchers say the data they captured from stereo cameras was nearly as precise as LiDAR. The gap in accuracy emerged when the stereo cameras’ data was being analyzed, they say.

“When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees,” Weinberger says. “But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.”

Cornell researchers compare AVOD with LiDAR, pseudo-LiDAR, and frontal-view (stereo). Ground-truth boxes are in red, predicted boxes in green; the observer in the pseudo-LiDAR plots (bottom row) is on the very left side looking to the right. The frontal-view approach (right) even miscalculates the depths of nearby objects and misses far-away objects entirely.

For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks (CNNs). The Cornell researchers say CNNs are very good at identifying objects in standard color photographs, but they can distort the 3D information if it’s represented from the front. Again, when the Cornell researchers switched the representation from a frontal perspective to a bird’s-eye view, the accuracy more than tripled.
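The core trick behind this pseudo-LiDAR idea is simple to express: back-project each pixel of an estimated depth map into a 3D point using the camera intrinsics, then hand the resulting point cloud (or its bird’s-eye-view projection) to the same detectors used for LiDAR. Below is a minimal numpy sketch of that back-projection; the intrinsic values and synthetic depth map are placeholders, and in practice the depth would come from a stereo depth-estimation network.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, metres) into an N x 3 point cloud.

    Minimal sketch of the pseudo-LiDAR idea: once pixels become 3D points,
    they can be viewed from a bird's-eye perspective instead of the frontal view.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # right
    y = (v - cy) * z / fy          # down
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with a valid depth

# Placeholder intrinsics and a synthetic depth map, just to show the call.
depth = np.full((375, 1242), 10.0)                 # 10 m everywhere
cloud = depth_to_pseudo_lidar(depth, fx=721.5, fy=721.5, cx=609.6, cy=172.9)
print(cloud.shape)                                  # (465750, 3)
```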

“There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information,” said co-author Bharath Hariharan, assistant professor of computer science. “Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.”

“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car,” said Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper. “The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”

Understand.ai accelerates image annotation for self-driving cars


Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Single image elements, such as a tree, a pedestrian, or a road sign must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm that uses this data for training.
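One common way to quantify how closely an annotation agrees with a reference at the pixel level is intersection-over-union (IoU); the short sketch below compares a machine-generated mask with a human-corrected one. This is a generic metric offered for illustration, not a description of understand.ai’s internal tooling.

```python
import numpy as np

def mask_iou(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Pixel-level intersection-over-union between two boolean masks."""
    intersection = np.logical_and(predicted, reference).sum()
    union = np.logical_or(predicted, reference).sum()
    return float(intersection) / float(union) if union else 1.0

# Toy example: a pre-labeled mask reviewed and slightly corrected by a human.
auto_mask = np.zeros((100, 100), dtype=bool)
auto_mask[20:60, 20:60] = True
human_mask = np.zeros((100, 100), dtype=bool)
human_mask[22:62, 20:60] = True
print(f"agreement: {mask_iou(auto_mask, human_mask):.2%}")
```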

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.


Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving while developing an autonomous model car in the KITCar student group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn more in a shorter period of time than in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.

Stanford AI Camera Offers Faster, More Efficient Image Classification

The image recognition technology that underlies today’s autonomous cars and aerial drones depends on artificial intelligence: the computers essentially teach themselves to recognize objects like a dog, a pedestrian crossing the street or a stopped car. The problem is that the computers running the artificial intelligence algorithms are currently too large and slow for future…


Prophesee introduces Onboard reference system for event-based machine vision in IIoT applications

Prophesee SA (formerly Chronocam) now sells its first commercial implementation of its event-based vision technology for machines. The Onboard reference system could help developers of vision-enabled industrial automation systems such as robots, inspection equipment, and monitoring and surveillance devices. It features a Prophesee-enabled VGA-resolution camera combined with a Qualcomm Snapdragon processor and can be quickly…


Tips for choosing a 3D vision system

With four times as many color receptors as humans, the mantis shrimp has the most impressive eyes in nature. Manufacturers have long relied on human vision for complex picking and assembly processes, but 3D vision systems are beginning to replicate the capability of human vision in robotics. Here, Nigel Smith, managing director of Toshiba…
