Velodyne Lidar acquires Mapper.ai for advanced driver assistance systems

SAN JOSE, Calif. — Velodyne Lidar Inc. today announced that it has acquired Mapper.ai’s mapping and localization software, as well as its intellectual property assets. Velodyne said Mapper’s technology will enable it to accelerate development of the Vella software for its directional-view Velarray lidar sensor.

The Velarray is the first solid-state Velodyne lidar sensor that is embeddable and fits behind a windshield, said Velodyne, which described it as “an integral component for superior, more effective advanced driver assistance systems” (ADAS).

The company provides lidar sensors for autonomous vehicles and driver assistance. David Hall, Velodyne’s founder and CEO, invented real-time surround-view lidar systems in 2005 as part of Velodyne Acoustics. His invention revolutionized perception and autonomy for automotive, new mobility, mapping, robotics, and security applications.

Velodyne said its high-performance product line includes a broad range of sensors, including the cost-effective Puck, the versatile Ultra Puck, and the autonomy-advancing Alpha Puck.

Mapper.ai staffers to join Velodyne

Mapper’s entire leadership and engineering teams will join Velodyne, bolstering the company’s large and growing software-development group. The talent from Mapper.ai will augment the current team of engineers working on Vella software, which will accelerate Velodyne’s production of ADAS systems.

Velodyne claimed its technology will allow customers to unlock advanced capabilities for ADAS features, including pedestrian and bicycle avoidance, Lane Keep Assistance (LKA), Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Traffic Jam Assist (TJA).

“By adding Vella software to our broad portfolio of lidar technology, Velodyne is poised to revolutionize ADAS performance and safety,” stated Anand Gopalan, chief technology officer at Velodyne. “Expanding our team to develop Vella is a giant step towards achieving our goal of mass-producing an ADAS solution that dramatically improves roadway safety.”

“Mapper technology gives us access to some key algorithmic elements and accelerates our development timeline,” Gopalan added. “Together, our sensors and software will allow powerful lidar-based safety solutions to be available on every vehicle.”

Mapper.ai to contribute to Velodyne software

Mapper.ai developers will work on the Vella software for the Velarray sensor. Source: Velodyne Lidar

“Velodyne has both created the market for high-fidelity automotive lidar and established itself as the leader. We have been Velodyne customers for years and have already integrated their lidar sensors into easily deployable solutions for scalable high-definition mapping,” said Dr. Nikhil Naikal, founder and CEO of Mapper, who is joining Velodyne. “We are excited to use our technology to speed up Velodyne’s lidar-centric software approach to ADAS.”

In addition to ADAS, Velodyne said it will incorporate Mapper technology into lidar-centric solutions for other emerging applications, including autonomous vehicles, last-mile delivery services, security, smart cities, smart agriculture, robotics, and unmanned aerial vehicles.

ASTM International proposes standards guide, center of excellence for exoskeletons

One of the barriers to more widespread development and adoption of exoskeletons for industrial, medical, and military use has been a lack of standards. ASTM International this month proposed a guide to provide standardized tools to assess and improve the usability and usefulness of exoskeletons and exosuits.

“Exoskeletons and exosuits can open up a world of possibilities, from helping workers perform industrial tasks while not getting overstressed, to helping stroke victims learning to walk again, to helping soldiers carry heavier rucksacks longer distances,” said Kevin Purcell, an ergonomist at the U.S. Army Public Health Center’s Aberdeen Proving Ground. “But if it doesn’t help you perform your task and/or it’s hard to use, it won’t get used.”

He added that the guide will incorporate ways to understand the attributes of exoskeletons, as well as observation methods and questionnaires to help assess an exoskeleton’s performance and safety.

“The biggest challenge in creating this standard is that exoskeletons change greatly depending on the task the exoskeleton is designed to help,” said Purcell. “For instance, an industrial exoskeleton is a totally different design from one used for medical rehabilitation. The proposed standard will need to cover all types and industries.”

According to Purcell, industrial, medical rehabilitation, and defense users will benefit most from the proposed standard, as will exoskeleton manufacturers and regulatory bodies.

The F48 committee of ASTM International, previously known as the American Society for Testing and Materials, was formed in 2017. It is currently working on the proposed exoskeleton and exosuit standard, WK68719. Its six subcommittees comprise about 150 members, including startups, government agencies, and enterprises such as Boeing and BMW.

ASTM publishes first standards

In May, ASTM International published the committee’s first two exoskeleton standards, which are intended to provide consensus terminology (F3323) and set forth basic labeling and other informational requirements (F3358). The standards are available for purchase.

“Exoskeletons embody the technological promise of empowering humans to be all they can be,” said F48 committee member William Billotte, a physical scientist at the U.S. National Institute of Standards and Technology (NIST). “We want to make sure that labels and product information are clear, so that exoskeletons fit people properly, so that they function safely and effectively, and so that people can get the most from these innovative products.”

The committee is working on several proposed standards and welcomes more participation from members of the exoskeleton community. For example, Billotte noted that the committee seeks experts in cybersecurity due to the growing need to secure data, controls, and biometrics in many exoskeletons.

An exoskeleton vest at a BMW plant in Spartanburg, S.C. Source: BMW

Call for an exoskeleton center of excellence

Last month, ASTM International called for proposals for an “Exo Technologies Center of Excellence.” The winner would receive up to $250,000 per year for up to five years. Full proposals are due today, and the winner will be announced in September, said ASTM.

“Now is the right time to create a hub of collaboration among startups, companies, and other entities that are exploring how exoskeletons could support factory workers, patients, the military, and many other people,” stated ASTM International President Katharine Morgan. “We look forward to this new center serving as a catalyst for game-changing R&D, standardization, related training, partnerships, and other efforts that help the world benefit from this exciting new technology.”

The center of excellence is intended to fill knowledge gaps, provide a global hub for education and a neutral forum to discuss common challenges, and provide a library of community resources. It should also coordinate global links among stakeholders, said ASTM.

West Conshohocken, Pa.-based ASTM International said it meets World Trade Organization (WTO) principles for developing international standards. The organization’s standards are used globally in research and development, product testing, quality systems, commercial transactions, and more.

Self-driving cars may not be best for older drivers, says Newcastle University study

VOICE member Ian Fairclough and study lead Dr. Shuo Li in test of older drivers. Source: Newcastle University

With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding we see them adjust their driving to avoid difficult situations,” explained Dr Shuo Li, an expert in intelligent transport systems at Newcastle University.

“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”

Potential game-changer

Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities in all situations without input from the driver.

But it’s not as clear-cut as it seems, said Li.

“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”

“But, unlike Levels 4 and 5, there are still some situations where the car would ask the driver to take back control, and at that point they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people that switch between tasks is quite easy, but as we age it becomes increasingly difficult, and this is further complicated if the conditions on the road are poor.”
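
For reference, here is a compact summary of the driving-automation levels Li refers to; the one-line descriptions are paraphrased from the SAE J3016 taxonomy rather than taken from the Newcastle study.

```python
# SAE J3016 driving-automation levels, paraphrased for reference.
# These one-line descriptions come from the SAE taxonomy, not from the study itself.
SAE_LEVELS = {
    0: "No automation: the human driver does all the driving",
    1: "Driver assistance: steering or speed support, one at a time",
    2: "Partial automation: combined steering and speed support; the driver supervises",
    3: "Conditional automation: the car drives itself but may ask the driver to take back control",
    4: "High automation: no driver takeover needed within a defined operating domain",
    5: "Full automation: the car handles all conditions; no driver input required",
}

if __name__ == "__main__":
    for level, description in SAE_LEVELS.items():
        print(f"Level {level}: {description}")
```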

Newcastle University DriveLAB tests older drivers

Led by Professor Phil Blythe and Dr. Li, the Newcastle University team has been researching how long it takes older drivers to take back control of an automated car in different scenarios, as well as the quality of their driving once they do.

Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to “take back” control of the highly automated car and avoid a stationary vehicle in three scenarios: on a motorway, on a city road, and in bad weather when visibility was poor.

The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time), when they made an “active input” such as braking or taking the steering wheel (take-over time), and finally when they registered the obstruction and indicated to move out and avoid it (indicator time).

“In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60mph, that means our older drivers would have needed an extra 35m warning distance — that’s equivalent to the length of 10 cars.

“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
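
The warning-distance figure follows directly from the response-time gap and the travel speed. A minimal sketch of the arithmetic behind the numbers Li quotes (60 mph, 8.3 s versus roughly 7 s) is below.

```python
# Rough check of the extra warning distance quoted above.
# Assumes a constant speed of 60 mph and the mean response times reported for each group.

MPH_TO_MS = 0.44704                 # metres per second per mile per hour

speed_ms = 60 * MPH_TO_MS           # ~26.8 m/s
older_time_s = 8.3                  # seconds to negotiate the obstacle, older group
younger_time_s = 7.0                # seconds, younger group

extra_distance_m = speed_ms * (older_time_s - younger_time_s)
print(f"Extra warning distance: {extra_distance_m:.0f} m")   # ~35 m, about 10 car lengths
```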

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.

In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.

VOICE member Pat Wilkinson. Source: Newcastle University

Designing automated cars of the future

The research team also explored older drivers’ opinions and requirements towards the design of automated vehicles after gaining first-hand experience with the technologies on the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.

The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.

“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”

“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”

Case studies of older drivers

Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.

Now 74, the former magistrate said it’s interesting to see how technology is changing and gradually taking control – and responsibility – away from the driver.

“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”

“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.

“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”

Ian Fairclough, 77 from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”

“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”

Fairclough took part in two of the studies in the VR simulator and said he found it difficult to switch attention quickly from one task to another.

“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear, and practical. I think perhaps you can have too many buttons.”

Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.

Understand.ai accelerates image annotation for self-driving cars

Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Single image elements, such as a tree, a pedestrian, or a road sign, must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm that is trained with it.
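
To make the idea of pixel-accurate annotations concrete, here is a minimal, hypothetical example of what a single labeled frame might look like. Understand.ai’s actual data format is not public, so the field names and values below are purely illustrative.

```python
# Hypothetical annotation record for one camera frame.
# Field names and values are illustrative only, not understand.ai's schema.
annotation = {
    "frame": "drive_042/frame_000317.png",
    "objects": [
        {
            "label": "pedestrian",
            # Axis-aligned bounding box in pixel coordinates: (x_min, y_min, x_max, y_max)
            "bbox": (412, 188, 446, 279),
        },
        {
            "label": "road_sign",
            "bbox": (901, 95, 934, 131),
        },
    ],
    "reviewed_by_human": True,   # final quality control is done by people
}

# A training pipeline would pair each frame with its annotation records
# and learn to predict the labels and boxes from the raw pixels.
```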

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.

Robotics Summit & Expo 2019 logoKeynotes | Speakers | Exhibitors | Register

Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving while developing an autonomous model car in the KITCar student group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn as much in such a short period of time as in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.

US robot shipments break record despite automotive decline

Companies in North America installed robots at a record pace in 2018, according to the Robotic Industries Association (RIA). The 35,880 units shipped in 2018 marked a 7 percent increase over 2017. In the United States specifically, robot shipments increased 15 percent to a record 28,478.

Most areas showed growth in robot shipments, including food and consumer goods (48%), plastics and rubber (37%), life sciences (31%), and electronics (22%). Overall robot shipments to non-automotive companies increased 41 percent to a record 16,702 units.

The only sector analyzed by the RIA that slowed was automotive, which dropped 12 percent to 19,178 units shipped in North America from the 21,732 units shipped in 2017. The automotive industry still accounted for 53 percent of total robot shipments in North America in 2018. But that is the lowest percentage share since 2010.
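
The automotive share and decline quoted above can be reproduced from the RIA unit counts; a quick sketch of the arithmetic:

```python
# Reproduce the percentages quoted above from the RIA unit counts.
total_2018 = 35_880        # all robots shipped in North America in 2018
auto_2018 = 19_178         # automotive shipments in 2018
auto_2017 = 21_732         # automotive shipments in 2017

auto_share = auto_2018 / total_2018 * 100               # ~53% of total shipments
auto_change = (auto_2018 - auto_2017) / auto_2017 * 100 # ~-12% year over year

print(f"Automotive share of 2018 shipments: {auto_share:.0f}%")
print(f"Automotive year-over-year change:   {auto_change:.0f}%")
```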

And many signs point to a downshift in automotive sales, so automotive could continue to see a decline in overall robot shipments. After three straight years of growth (a record 17.6 million units were sold in 2016), sales of new vehicles in the US have started to slide. The US is the second-largest automotive market in the world.

China, the world’s largest car market, saw annual automotive sales dip for the first time in nearly two decades in 2018. Auto sales in China fell 3 percent in 2018, but analysts don’t see things improving going forward. Automakers sold about 28 million vehicles in China in 2018.

“While the automotive industry has always led the way in implementing robotics here in North America, we are quite pleased to see other industries continuing to realize the benefits of automation,” said Jeff Burnstein, president of the Association for Advancing Automation (A3). “And as we’ve heard from our members and at shows such as Automate, these sales and shipments aren’t just to large, multinational companies anymore. Small and medium-sized companies are using robots to solve real-world challenges, which is helping them be more competitive on a global scale.”

While the US installed more robots than ever, it still has a ways to go in terms of being the most automated country in the world. That title belongs to the Republic of Korea, according to the latest robot density figures from the International Federation of Robotics (IFR), which measure the number of robots per 10,000 workers in an industry. The Republic of Korea, also known as South Korea, had a robot density of 631, eight times the global average.

The US ranked seventh, according to the IFR’s analysis, with a robot density of 189 in 2016. The main drivers of growth were the “Made in the US” and re-shoring initiatives.
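
Robot density is simply the number of installed robots per 10,000 manufacturing employees, so it can be computed from two counts. The sketch below shows the formula; the robot and workforce figures in the example are made-up placeholders, not IFR data.

```python
# Robot density as defined by the IFR: installed robots per 10,000 manufacturing employees.
def robot_density(installed_robots: int, manufacturing_employees: int) -> float:
    return installed_robots / manufacturing_employees * 10_000

# Example with hypothetical counts (placeholders, not real IFR figures):
print(robot_density(25_000, 1_250_000))   # 200.0 robots per 10,000 workers
```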

Countries with the most automation in manufacturing industries. (Credit: International Federation of Robotics)

10 Most Automated Countries in the World

Robot density is a measurement that tracks the number of robots per 10,000 workers in an industry. According to the International Federation of Robotics (IFR), robot density in manufacturing industries increased worldwide in 2016. This shows more countries are turning to automation to fill their manufacturing needs. The average global robot density in 2016 was…
