U.S. Robotics Roadmap calls for white papers for revision

The U.S. National Robotics Roadmap was first created 10 years ago. Since then, government agencies, universities, and companies have used it as a reference for where robotics is going. The first roadmap was published in 2009 and then revised in 2013 and 2016. The objective is to publish the fourth version of the roadmap by summer 2020.

The team developing the U.S. National Robotics Roadmap has put out a call to engage about 150 to 200 people from academia and industry to ensure that it is representative of the robotics community’s view of the future. The roadmap will cover manufacturing, service, medical, first-responder, and space robotics.

The revised roadmap will also include considerations related to ethics and workforce. It will cover emerging applications, the key challenges to progress, and what research and development is needed.

Join community workshops

Three one-and-a-half-day workshops will be organized for community input to the roadmap. The workshops will take place as follows:

  • Sept. 11-12 in Chicago (organized by Nancy Amato, co-director of the Parasol Lab at Texas A&M University and head of the Department of Computer Science at the University of Illinois at Urbana-Champaign)
  • Oct. 17-18 in Los Angeles (organized by Maja Mataric, Chan Soon-Shiong distinguished professor of computer science, neuroscience, and pediatrics at the University of Southern California)
  • Nov. 15-16 in Lowell, Mass. (organized by Holly Yanco, director of the NERVE Center at the University of Massachusetts Lowell)

Participation in these workshops will be by invitation only. To participate, please submit a white paper/position statement of a maximum length of 1.5 pages. What are key use cases for robotics in a five-to-10-year perspective, what are key limitations, and what R&D is needed in that time frame? The white paper can address all three aspects or focus on one of them. The white paper must include the following information:

  • Name, affiliation, and e-mail address
  • A position statement (1.5 pages max)

Please submit the white paper as regular text or as a PDF file. Statements that are too long will be ignored. Position papers that only focus on current research are not appropriate. A white paper should present a future vision and not merely discuss state of the art.

White papers should be submitted by end of the day Aug. 15, 2019, to roadmapping@robotics-vo.org. Late submissions may not be considered. We will evaluate submitted white papers by Aug. 18 and select people for the workshops by Aug. 19.

Roadmap revision timeline

The workshop reports will be used as the basis for a synthesis of a new roadmap. The nominal timeline is:

  • August 2019: Call for white papers
  • September – November 2019: Workshops
  • December 2019: Workshop reports finalized
  • January 2020: Synthesis meeting at UC San Diego
  • February 2020: Publish draft roadmap for community feedback
  • April 2020: Revision of roadmap based on community feedback
  • May 2020: Finalize roadmap with graphics design
  • July 2020: Publish roadmap

If you have any questions about the process, the scope, etc., please send e-mail to Henrik I Christensen at hichristensen@eng.ucsd.edu.

Henrik I Christensen spoke at the Robotics Summit & Expo in Boston.

Editor’s note: Christensen, Qualcomm Chancellor’s Chair of Robot Systems at the University of California San Diego and co-founder of Robust AI, delivered a keynote address at last month’s Robotics Summit & Expo, produced by The Robot Report.

RaaS and AI help retail supply chains adopt and manage robotics, says Kindred VP

Unlike industrial automation, which has been affected by a decline in automotive sales worldwide, robots for e-commerce order fulfillment continue to face strong demand. Warehouses, third-party logistics providers, and grocers are turning to robots because of competitive pressures, labor scarcities, and consumer expectations of rapid delivery. However, robotics developers and suppliers must distinguish themselves in a crowded market. The Robotics-as-a-Service, or RaaS, model is one way to serve retail supply chain needs, said Kindred Inc.

By 2025, there will be more than 4 million robots in operation at 50,000 warehouses around the world, predicted ABI Research. It cited improvements in computer vision, artificial intelligence, and deep learning.

“Economically viable mobile manipulation robots from the likes of RightHand Robotics and Kindred Systems are now enabling a wider variety of individual items to be automatically picked and placed within a fulfillment operation,” said ABI Research. “By combining mobile robots, picking robots, and even autonomous forklifts, fulfillment centers can achieve greater levels of automation in an efficient and cost-effective way.”

“Many robot technology vendors are providing additional value by offering flexible pricing options,” stated the research firm. “Robotics-as-a-Service models mean that large CapEx costs can be replaced with more accessible OpEx costs that are directly proportional to the consumption of technologies or services, improving the affordability of robotics systems among the midmarket, further driving adoption.”
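As a rough illustration of the CapEx-versus-OpEx trade-off ABI describes, the sketch below compares a purchased picking cell against a consumption-based RaaS fee. Every figure in it is a hypothetical placeholder, not Kindred or ABI pricing.

```python
# Hypothetical comparison of buying a picking cell outright (CapEx) versus
# paying a per-pick Robotics-as-a-Service fee (OpEx). All figures are
# illustrative assumptions, not vendor pricing.

CAPEX_PURCHASE = 250_000        # assumed up-front cost of an owned robot cell ($)
ANNUAL_MAINTENANCE = 25_000     # assumed yearly service contract for owned cell ($)
RAAS_FEE_PER_PICK = 0.03        # assumed RaaS charge per picked item ($)

def annual_cost_owned(years: int) -> float:
    """Total cost of ownership spread over the given horizon, per year."""
    return CAPEX_PURCHASE / years + ANNUAL_MAINTENANCE

def annual_cost_raas(picks_per_year: int) -> float:
    """RaaS cost scales directly with throughput (consumption-based OpEx)."""
    return picks_per_year * RAAS_FEE_PER_PICK

if __name__ == "__main__":
    for picks in (500_000, 1_000_000, 2_000_000):
        print(f"{picks:>9,} picks/yr: owned ≈ ${annual_cost_owned(5):,.0f}/yr, "
              f"RaaS ≈ ${annual_cost_raas(picks):,.0f}/yr")
```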

The Robot Report spoke with Victor Anjos, who recently joined San Francisco-based Kindred as vice president of engineering, about how AI and RaaS can help the logistics industry.

Kindred applies AI to sortation

Can you briefly describe Kindred’s offerings?

Anjos: Sure. Kindred makes AI-enhanced, autonomous, piece-picking robots. Today, they’re optimized to perform the piece-picking process in a fulfillment center, for example, in a facility that fills individual e-commerce orders.

It’s important to understand our solution is more than a shiny robotic arm. Besides the part you can see — the robotic arm — our solution includes an AI platform to enable autonomous learning and in-motion planning, plus the latest in robotic technology, backed by our integration and support services.

The Robot Report visited Kindred at Automate/ProMat 2019 — what’s new since then?

Anjos: Since then, we’ve been hard at work on a new gripper optimized to handle rigid items like shampoo bottles and small cartons. We’ve got a ton of new AI models in development, and we continue to tune SORT’s performance using reinforcement learning.

What should engineers at user companies know about AutoGrasp and SORT?

Anjos: AutoGrasp is the unique combination of technologies behind SORT. There’s the AI-powered vision, grasping, and manipulation technology that allows the robot to quickly and accurately sort batches into discrete orders.

Then there’s the robotic device itself, which has been engineered for speed, agility and a wide range of motion. And finally, we offer WMS [warehouse management system] integration, process design, and deployment services, as well as ongoing maintenance and support, of course.

What use cases are better for collaborative robots or cobots versus standard industrial arms?

Anjos: Kindred’s solution is more than a robotic arm. It’s equipped with AI-enhanced computer vision, so it can work effectively in the dynamic conditions that we often find in a fulfillment environment. It responds to what it senses in real time and can even configure itself on the fly by changing the suction grip attachment while in motion.

The bottom line is, any solution that works for several different use cases is the result of compromises. That’s the nature of any multi-purpose device. We chose to optimize SORT for a specific step in the fulfillment process. That’s how we’re able to give it the ability to grasp, manipulate and place items with human-like accuracy — but with machine-like consistency and stamina.

And, like the people our robot works alongside, SORT can learn on the job. Not only from its own experience, but also from the combined experience of other robots on the network.

RaaS can aid robotics adoption

Victor Anjos, VP of engineering, Kindred

Have you always offered both the AI and robotics elements of your products through an RaaS model?

Anjos: Yes, we have. Both are included in RaaS, and it has been an important part of our model.

Can you give an example of how RaaS works during implementation and then for ongoing support? What sorts of issues can arise?

Anjos: With our RaaS model, the assets are owned and maintained by Kindred, while the customer pays for the picking service as needed. Implementing RaaS eliminates the customer’s upfront capital expense.

Of course, the customer still needs to allocate operational and IT resources to make the RaaS implementation a success.

Is RaaS evolving or becoming more widespread and understood? Are there still pockets of supply chains that aren’t familiar with leasing models?

Anjos: RaaS is a relatively new concept for the supply chain industry, but it’s attracting a lot of attention. The financial model aligns with their operating budgets. And customers have an ability to scale the use of robots to meet peak demand, increasing asset utilization throughout the year.

Are there situations where it’s better to develop robots in-house or buy them outright than to use RaaS?

Anjos: Every customer I’ve spoken with has their hands full managing fulfillment operations. They’re not very eager to hire a team of AI developers to build a fleet of robots and hire engineers to maintain them! And Kindred isn’t interested in selling apparel, so it all works out!

What issues can arise during a RaaS relationship, and how much should providers and clients collaborate?

Anjos: Every supply chain system implementation is unique. During implementation, Kindred’s customer-success team works with our customer to understand performance requirements, integrate Kindred robots into their existing warehouse processes and systems, and provide onsite and remote support to ensure the success of each implementation.

Do you see RaaS spreading from order fulfillment to retail stores? What else would you like to see?

Anjos: That’s very possible. Robot use is increasing across the entire retail industry, and the RaaS model certainly makes adoption of this technology even easier and more beneficial.

For example, I can see how some of the robotic technologies developed for traditional fulfillment centers could be used in urban or micro-fulfillment center scenarios.

Vegebot robot applies machine learning to harvest lettuce

Vegebot, a vegetable-picking robot, uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop.

A team at the University of Cambridge initially trained Vegebot to recognize and harvest iceberg lettuce in the laboratory. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The researchers published their results in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the U.K., iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.

The Vegebot first identifies the “target” crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested. Finally, it cuts the lettuce from the rest of the plant without crushing it so that it is “supermarket ready.”

“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

Vegebot designed for lettuce-picking challenge

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image. Then for each lettuce, the robot classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
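A minimal sketch of that two-stage flow (detect every head in the overhead image, then classify each one as harvestable or not) might look like the following. The hard-coded boxes and the green-pixel heuristic stand in for the trained models and are purely illustrative, not the Cambridge team's actual networks.

```python
# Sketch of a detect-then-classify harvesting pipeline. The placeholder
# detector and color heuristic stand in for the trained models.
import numpy as np

def detect_lettuces(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Stage 1 placeholder: return candidate bounding boxes (x, y, w, h)."""
    # A real system would run a trained object detector over the overhead image.
    return [(100, 120, 80, 80), (300, 140, 85, 82)]

def classify_harvestable(crop: np.ndarray) -> bool:
    """Stage 2 placeholder: is this head mature, healthy, and ready to cut?"""
    # A real system would run a trained classifier on the cropped region; here
    # we simply require that most pixels look green (channel order assumed RGB).
    green = (crop[..., 1] > 120) & (crop[..., 1] > crop[..., 0])
    return float(green.mean()) > 0.5

def plan_harvest(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return the boxes of heads that should be cut on this pass."""
    targets = []
    for (x, y, w, h) in detect_lettuces(image):
        crop = image[y:y + h, x:x + w]
        if crop.size and classify_harvestable(crop):
            targets.append((x, y, w, h))
    return targets
```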

Vegebot uses machine vision to identify heads of iceberg lettuce. Credit: University of Cambridge

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognize healthy lettuce in the lab, the team then trained it in the field, in a variety of weather conditions, on thousands of real lettuce heads.

A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In the future, robotic harvesters could help address problems with labor shortages in agriculture. They could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded.

However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6 million ($8.26 million U.S.) for the new CDT, which will support at least 50 Ph.D. students.

AMP Robotics announces largest deployment of AI-guided recycling robots

AMP robotics deployment at SSR in Florida. Source: Business Wire

DENVER — AMP Robotics Corp., a pioneer in artificial intelligence and robotics for the recycling industry, today announced the further expansion of AI-guided robots for recycling municipal solid waste at Single Stream Recyclers LLC. This follows Single Stream Recyclers’ recent unveiling of its first installation of AMP systems at its state-of-the-art material recovery facility in Florida, the first of its kind in the state.

Single Stream Recyclers (SSR) currently operates six AMP Cortex single-robot systems at its 100,000 square-foot facility in Sarasota. The latest deployment will add another four AMP Cortex dual-robot systems (DRS), bringing the total deployment to 14 robots. The AMP Cortex DRS uses two high-speed precision robots that sort, pick, and place materials. The robots are installed on a number of different sorting lines throughout the facility and will process plastics, cartons, paper, cardboard, metals, and other materials.

“Robots are the future of the recycling industry,” said John Hansen, co-owner of SSR. “Our investment with AMP is vital to our goal of creating the most efficient recycling operation possible, while producing the highest value commodities for resale.”

“AMP’s robots are highly reliable and can consistently pick 70-80 items a minute as needed, twice as fast as humanly possible and with greater accuracy,” added Eric Konik, co-owner of SSR. “This will help us lower cost, remove contamination, increase the purity of our commodity bales, divert waste from the landfill, and increase overall recycling rates.”

AMP Neuron AI guides materials sorting

The AMP Cortex robots are guided by the AMP Neuron AI platform to perform tasks. AMP Neuron applies computer vision and machine learning to recognize different colors, textures, shapes, sizes, and patterns to identify material characteristics.

Accurate down to the brand of a package, the system transforms millions of images into data, directing the robots to pick and place targeted material for recycling. The AI platform digitizes the material stream, capturing data on what goes in and out, so informed decisions can be made about operations.
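A simplified sketch of how such a vision-guided sorting loop could be organized is shown below. The material classes, confidence threshold, routing table, and function names are assumptions made for illustration, not AMP's actual software.

```python
# Illustrative sketch of an AI-guided sorting loop: classify each item seen on
# the belt, route it to a chute, and log what was seen. The classes, threshold,
# and routing table are assumptions for illustration, not AMP's design.
from collections import Counter

ROUTING = {
    "PET_bottle": "plastics_chute",
    "carton": "carton_chute",
    "cardboard": "fiber_chute",
    "aluminum_can": "metals_chute",
}

stream_log = Counter()   # "digitizes" the stream: counts of everything that passed by

def classify(crop):
    """Placeholder for the vision model: returns (label, confidence)."""
    return "PET_bottle", 0.93

def sort_item(crop, min_confidence: float = 0.8):
    label, confidence = classify(crop)
    stream_log[label] += 1                 # record the item for operational reporting
    if confidence < min_confidence or label not in ROUTING:
        return None                        # leave uncertain items to downstream sorting
    return ROUTING[label]                  # the robot picks and places into this chute
```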

“SSR has built a world-class facility that sets the bar for modern recycling. John, Eric and their team are at the forefront of their industry and we are grateful to be a part of their plans,” said Matanya Horowitz, CEO of AMP Robotics. “SSR represents the most comprehensive application of AI and robotics in the recycling industry, a major milestone not only for us, but for the advancement of the circular economy.”

The new systems will be installed this summer. Upon completion, AMP’s installation at SSR is believed to be the single largest application of AI-guided robots for recycling in the United States and likely the world. In addition to Florida, AMP has installations at numerous facilities across the country, including in California, Colorado, Indiana, Minnesota, and Wisconsin, with many more planned. Earlier this spring, AMP expanded globally by partnering with Ryohshin Ltd. to bring robotic recycling to Japan.

About AMP Robotics

AMP Robotics is transforming the economics of recycling with AI-guided robots. The company’s high-performance industrial robotics system, AMP Cortex, precisely automates the identification, sorting, and processing of material streams to extract maximum value for businesses that recycle municipal solid waste, e-waste, and construction and demolition debris.

The AMP Neuron AI platform operates AMP Cortex using advanced computer vision and machine learning to continuously train itself by processing millions of material images within an ever-expanding neural network that experientially adapts to changes in a facility’s material stream.

About Single Stream Recyclers

Single Stream Recyclers is a materials recovery facility in Sarasota, Fla. It processes materials from all over the west coast of Florida. The facility sorts, bales, and ships aluminum, cardboard, food and beverage cartons, glass, paper, plastics, metal, and other recyclables from residential curbside and commercial recycling collection. SSR is heavily invested in technology to help create the best possible end products and reduce contamination as well as residue.

Rutgers develops system to optimize automated packing


Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.

“We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software,” said the study’s senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.

Bekris, Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, formed a team to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception, and robust motion planning.

The scientists’ peer-reviewed study (PDF) was published recently at the IEEE International Conference on Robotics and Automation, where it was a finalist for the Best Paper Award in Automation. The study coincides with the growing trend of deploying robots to perform logistics, retail and warehouse tasks. Advances in robotics are accelerating at an unprecedented pace due to machine learning algorithms that allow for continuous experiments.

The video above shows a Kuka LBR iiwa robotic arm tightly packing objects from a bin into a shipping order box (five times actual speed). The researchers used two Intel RealSense SR300 depth-sensing cameras.

Pipeline in terms of control, data flow (green lines) and failure handling (red lines). The blocks identify the modules of the system. | Credit: Rutgers University

Tightly packing products picked from an unorganized pile remains largely a manual task, even though it is critical to warehouse efficiency. Automating such tasks is important for companies’ competitiveness and allows people to focus on less menial and physically taxing work, according to the Rutgers scientific team.

The Rutgers study focused on placing objects from a bin into a small shipping box and tightly arranging them. This is a more difficult task for a robot compared with just picking up an object and dropping it into a box.

The researchers developed software and algorithms for their robotic arm. They used visual data and a simple suction cup, which doubles as a finger for pushing objects. The resulting system can topple objects to get a desirable surface for grabbing them. Furthermore, it uses sensor data to pull objects toward a targeted area and push objects together. During these operations, it uses real-time monitoring to detect and avoid potential failures.
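The sketch below captures the shape of that loop, with the success path and the failure-handling retries mirroring the green and red lines in the pipeline diagram above. The perception, arm, and box interfaces and the retry policy are illustrative assumptions, not the published implementation.

```python
# Sketch of a pick-and-pack loop with failure handling, loosely following the
# description above. Interface names and retry policy are assumptions.

def pack_object(obj, box, perception, arm, max_retries: int = 2) -> bool:
    """Try to place one object tightly into the box, retrying on failures."""
    for _ in range(max_retries + 1):
        pose = perception.estimate_pose(obj)
        if not perception.has_graspable_face(pose):
            arm.topple(obj)                 # expose a flat face for the suction cup
            continue
        if not arm.pick(obj):               # monitor: did suction actually engage?
            continue                        # failure path: re-perceive and retry
        placement = box.next_tight_placement(obj)
        arm.place(obj, placement)
        arm.push_together(box)              # nudge neighbors to close any gaps
        if perception.verify_in_place(obj, placement):
            return True                     # success path ("green lines")
        arm.retrieve(obj)                   # placement failed: pull it back out
    return False                            # give up and flag for a human ("red lines")
```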

Since the study focused on packing cube-shaped objects, a next step would be to explore packing objects of different shapes and sizes. Another step would be to explore automatic learning by the robotic system after it’s given a specific task.

Editor’s Note: This article was republished with permission from Rutgers University.

Brain Corp Europe opens in Amsterdam


A BrainOS-powered autonomous floor scrubber. | Credit: Brain Corp

San Diego-based Brain Corp, the Softbank-backed developer of autonomous navigation systems, has opened its European headquarters in Amsterdam. The reason for the expansion is two-fold: it helps Brain better support partners who do business in Europe, and it helps Brain find additional engineering talent.

“Amsterdam is a fantastic gateway to Europe and has one of the largest airports in Europe,” Sandy Agnos, Brain’s Director of Global Business Development, told The Robot Report. “It’s very business and tech friendly. It is the second-fastest-growing tech community, talent-wise, in Europe.”

Brain hired Michel Spruijt to lead Brain Corp Europe. He will be tasked with driving sales of BrainOS-powered machines, providing partner support, and overseeing general operations throughout Europe. Agnos said Brain was impressed by Spruijt’s previous experience growing an office from a few employees to more than 100.

“Under Michel Spruijt’s guidance, our vision of a world where the lives of people are made safer, easier, more productive, and more fulfilling with the help of robots will extend into Europe,” said Eugene Izhikevich, Brain Corp’s Co-Founder and CEO.

Agnos said there will initially be about 12 employees at Brain Corp Europe who focus mostly on service and support. She added that Brain is recruiting software engineering talent and will continue to grow the Amsterdam office.

A rendering of how BrainOS-powered machines sense their environment. | Credit: Brain Corp

Brain planning worldwide expansion

The European headquarters marks the second international office in Brain’s global expansion. The company opened an office in Tokyo in 2017. This made sense for a couple of reasons. Japanese tech giant Softbank led Brain’s $114 million funding round in mid-2017 via the Softbank Vision Fund. And Softbank’s new autonomous floor cleaning robot, Whiz, uses Brain’s autonomous navigation stack.

Agnos said Brain is planning to add other regional offices after Amsterdam. The dates are in flux, but future expansion includes:

  • Further growth in Europe in 2020
  • Expansion in Asia Pacific, specifically Australia and Korea, in mid- to late-2020
  • South America afterwards

“We follow our partners’ needs,” said Agnos. “We are becoming a global company with support offices around the world. The hardest part is we can’t expand fast enough. Our OEM partners already have large, global customer bases. We need to have the right people and infrastructure in each location.”

BrainOS-powered robots

BrainOS, the company’s cloud-connected operating system, currently powers thousands of floor care robots across numerous environments. Brain recently partnered with Nilfisk, a Copenhagen, Denmark-based cleaning solutions provider that has been around for 110-plus years. Nilfisk is licensing the BrainOS platform for the production, deployment, and support of its robotic floor cleaners.

Walmart, the world’s largest retailer, has 360 BrainOS-powered machines cleaning its stores across the United States. A human needs to initially teach the BrainOS-powered machines the layout of the stores. But after that initial demo, BrainOS’ combination of off-the-shelf hardware, sensors, and software enable the floor scrubbers to navigate autonomously. Brain employs a collection of cameras, sensors and LiDAR to ensure safety and obstacle avoidance. All the robots are connected to a cloud-based reporting system that allows them to be monitored and managed.
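Conceptually, that teach-then-repeat workflow can be sketched as follows. The class and method names are assumptions for illustration only, not the BrainOS API.

```python
# Minimal teach-and-repeat sketch of the workflow described above: an operator
# drives the machine once while its poses are recorded, then the robot replays
# the route and pauses when sensors report an obstacle. All names are
# illustrative assumptions, not the BrainOS API.
import time

class TeachAndRepeat:
    def __init__(self, drive, localization, obstacle_detector):
        self.drive = drive
        self.localization = localization
        self.obstacle_detector = obstacle_detector
        self.route = []                          # poses recorded during teaching

    def record_step(self):
        """Called repeatedly while a human drives the training run."""
        self.route.append(self.localization.current_pose())

    def replay(self):
        """Autonomous run: follow the taught route, pausing for obstacles."""
        for waypoint in self.route:
            while self.obstacle_detector.path_blocked(waypoint):
                self.drive.stop()                # hold position until the path clears
                time.sleep(0.5)
            self.drive.go_to(waypoint)
```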

At ProMat 2019, Brain debuted AutoDelivery, a proof-of-concept autonomous delivery robot designed for retail stores, warehouses, and factories. AutoDelivery, which can tow several cart types, boasts cameras, 4G LTE connectivity, and routing algorithms that allow it to learn its way around a store. AutoDelivery isn’t slated for commercial launch until early 2020.

Izhikevich recently told The Robot Report that Brain is exploring other types of mobile applications, including delivery, eldercare, security and more. In July 2018, Brain led a $13.4 million Series B for Savioke, which makes autonomous delivery robots. For years, Savioke built its autonomous navigation stack from scratch using ROS.

Understand.ai accelerates image annotation for self-driving cars

Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Single image elements, such as a tree, a pedestrian, or a road sign must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm that trains on it.
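One standard way to quantify pixel-level agreement between an automatic pre-label and a human-corrected annotation is intersection over union (IoU). The snippet below is a generic illustration of that metric, not understand.ai's internal quality check.

```python
# Intersection over union (IoU) between an automatic pre-label mask and a
# human-corrected mask. Generic metric for illustration only.
import numpy as np

def iou(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Both inputs are boolean arrays of the same shape (True = object pixel)."""
    intersection = np.logical_and(pred_mask, truth_mask).sum()
    union = np.logical_or(pred_mask, truth_mask).sum()
    return float(intersection) / union if union else 1.0

# Example: a pre-label that covers most of a ground-truth pedestrian mask.
pred = np.zeros((100, 100), dtype=bool);  pred[20:60, 30:70] = True
truth = np.zeros((100, 100), dtype=bool); truth[22:62, 30:70] = True
print(f"IoU = {iou(pred, truth):.2f}")   # values near 1.0 mean close agreement
```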

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.

Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving while developing an autonomous model car in the KITCar student group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn more in such a short period of time than in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.

Neural network helps autonomous car learn to handle the unknown


Shelley, Stanford’s autonomous Audi TTS, performs at Thunderhill Raceway Park. (Credit: Kurt Hickman)

Researchers at Stanford University have developed a new way of controlling autonomous cars that integrates prior driving experiences – a system that will help the cars perform more safely in extreme and unknown circumstances. Tested at the limits of friction on a racetrack using Niki, Stanford’s autonomous Volkswagen GTI, and Shelley, Stanford’s autonomous Audi TTS, the system performed about as well as an existing autonomous control system and an experienced racecar driver.

“Our work is motivated by safety, and we want autonomous vehicles to work in many scenarios, from normal driving on high-friction asphalt to fast, low-friction driving in ice and snow,” said Nathan Spielberg, a graduate student in mechanical engineering at Stanford and lead author of the paper about this research, published March 27 in Science Robotics. “We want our algorithms to be as good as the best skilled drivers—and, hopefully, better.”

While current autonomous cars might rely on in-the-moment evaluations of their environment, the control system these researchers designed incorporates data from recent maneuvers and past driving experiences – including trips Niki took around an icy test track near the Arctic Circle. Its ability to learn from the past could prove particularly powerful, given the abundance of autonomous car data researchers are producing in the process of developing these vehicles.

Physics and learning with a neural network

Control systems for autonomous cars need access to information about the available road-tire friction. This information dictates the limits of how hard the car can brake, accelerate and steer in order to stay on the road in critical emergency scenarios. If engineers want to safely push an autonomous car to its limits, such as having it plan an emergency maneuver on ice, they have to provide it with details, like the road-tire friction, in advance. This is difficult in the real world where friction is variable and often is difficult to predict.
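As a back-of-the-envelope reminder of why that friction number matters: the tires can supply a total acceleration of at most roughly the friction coefficient times gravity, shared between braking and turning. The snippet below is generic vehicle dynamics for illustration, not part of the Stanford controller.

```python
# Generic friction-circle arithmetic (not the Stanford controller): the tires
# supply at most mu * g of total acceleration, shared between braking and turning.
G = 9.81                                    # gravitational acceleration, m/s^2

def max_braking(mu: float, lateral_accel: float = 0.0) -> float:
    """Longitudinal deceleration still available after spending grip on turning."""
    total = mu * G                          # radius of the friction circle, m/s^2
    return max(total**2 - lateral_accel**2, 0.0) ** 0.5

print(f"dry asphalt (mu ~ 0.9): {max_braking(0.9):.1f} m/s^2 of braking available")
print(f"packed snow (mu ~ 0.3): {max_braking(0.3):.1f} m/s^2 of braking available")
```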

To develop a more flexible, responsive control system, the researchers built a neural network that integrates data from past driving experiences at Thunderhill Raceway in Willows, California, and a winter test facility with foundational knowledge provided by 200,000 physics-based trajectories.
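A minimal sketch of that idea, a single network trained on a mix of physics-simulated trajectories and logged driving data so it can predict vehicle dynamics without an explicit friction estimate, might look like this. The architecture, state and control dimensions, and training details are assumptions, not the paper's exact setup.

```python
# Sketch of a dynamics model trained on both physics-based trajectories and
# recorded driving data. Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class DynamicsNet(nn.Module):
    def __init__(self, n_state: int = 4, n_control: int = 2, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state + n_control, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_state),         # predicted state derivatives
        )

    def forward(self, state: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, control], dim=-1))

def train_step(model: DynamicsNet, optimizer: torch.optim.Optimizer, batch) -> float:
    """One gradient step on a batch mixing simulated and real samples."""
    state, control, target = batch              # target: measured state derivatives
    loss = nn.functional.mse_loss(model(state, control), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```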

The video above shows the neural network controller implemented on an autonomous Volkswagen GTI tested at the limits of handling (the ability of a vehicle to maneuver a track or road without skidding out of control) at Thunderhill Raceway.

“With the techniques available today, you often have to choose between data-driven methods and approaches grounded in fundamental physics,” said J. Christian Gerdes, professor of mechanical engineering and senior author of the paper. “We think the path forward is to blend these approaches in order to harness their individual strengths. Physics can provide insight into structuring and validating neural network models that, in turn, can leverage massive amounts of data.”

The group ran comparison tests for their new system at Thunderhill Raceway. First, Shelley sped around controlled by the physics-based autonomous system, pre-loaded with set information about the course and conditions. When compared on the same course during 10 consecutive trials, Shelley and a skilled amateur driver generated comparable lap times. Then, the researchers loaded Niki with their new neural network system. The car performed similarly running both the learned and physics-based systems, even though the neural network lacked explicit information about road friction.

In simulated tests, the neural network system outperformed the physics-based system in both high-friction and low-friction scenarios. It did particularly well in scenarios that mixed those two conditions.

Simple feedforward-feedback control structure used for path tracking on an automated vehicle. (Credit: Stanford University)

An abundance of data

The results were encouraging, but the researchers stress that their neural network system does not perform well in conditions outside the ones it has experienced. They say as autonomous cars generate additional data to train their network, the cars should be able to handle a wider range of conditions.

“With so many self-driving cars on the roads and in development, there is an abundance of data being generated from all kinds of driving scenarios,” Spielberg said. “We wanted to build a neural network because there should be some way to make use of that data. If we can develop vehicles that have seen thousands of times more interactions than we have, we can hopefully make them safer.”

Editor’s Note: This article was republished from Stanford University.

Brain code can now be copied for AI, robots, say researchers

KAIST researchers exploring the human brain as a model for robots, from left: Ph.D. candidate Su Jin An, Dr. Jee Hang Lee, and Prof. Sang Wan Lee. Source: KAIST

Researchers at the Korea Advanced Institute of Science and Technology (KAIST), the University of Cambridge, Japan’s National Institute for Information and Communications Technology (NICT), and Google DeepMind have argued that our understanding of how humans make intelligent decisions has now reached a critical point. Robot intelligence can be significantly enhanced by mimicking strategies that the human brain uses when we make decisions in our everyday lives, they said last week.

In our rapidly changing world, both humans and autonomous robots constantly need to learn and adapt to new environments. The difference is that humans are capable of making decisions according to their unique situations, whereas robots still rely on predetermined data to make decisions.

Rapid progress has been made in strengthening the physical capability of robots. However, their central control systems, which govern how robots decide what to do at any one time, are still inferior to those of humans. In particular, they often rely on pre-programmed instructions to direct their behavior, and lack the hallmark of human behavior, that is, the flexibility and capacity to quickly learn and adapt.

Applying neuroscience to the robot brain

Applying neuroscience in robotics, Prof. Sang Wan Lee from the Department of Bio and Brain Engineering at KAIST and Prof. Ben Seymour from the University of Cambridge and NICT proposed a case in which robots should be designed based on the principles of the human brain. They argue that robot intelligence can be significantly enhanced by mimicking strategies that the human brain uses during decision-making processes in everyday life.

The problem with importing human-like intelligence into robots has always been a difficult task without knowing the computational principles for how the human brain makes decisions — in other words, how to translate brain activity into computer code for the robots’ “brains.”

Brain-inspired solutions to robot learning. Neuroscientific views on various aspects of learning and cognition converge and create a new idea called “prefrontal metacontrol,” which can inspire researchers to design learning agents for key challenges in robotics such as performance-efficiency-speed, cooperation-competition, and exploration-exploitation trade-offs (Science Robotics)

However, researchers now argue that, following a series of recent discoveries in the field of computational neuroscience, there is enough of this code to effectively write it into robots. One of the examples discovered is the human brain’s “meta-controller.” It is a mechanism by which the brain decides how to switch between different subsystems to carry out complex tasks.
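One illustrative reading of such a meta-controller is an arbitrator that routes control to whichever subsystem (for example, a fast habitual controller or a slower deliberative planner) has recently been the more reliable predictor. The toy sketch below is an interpretation of that idea for illustration, not the published model's equations.

```python
# Toy sketch of a "meta-controller" that arbitrates between two subsystems based
# on how reliable each one's recent predictions have been. Illustrative only.

class MetaController:
    def __init__(self, habit, planner, smoothing: float = 0.9):
        self.controllers = {"habit": habit, "planner": planner}
        self.reliability = {"habit": 0.5, "planner": 0.5}
        self.smoothing = smoothing

    def update_reliability(self, name: str, prediction_error: float):
        """Low recent prediction error -> higher reliability for that subsystem."""
        score = 1.0 / (1.0 + abs(prediction_error))
        self.reliability[name] = (self.smoothing * self.reliability[name]
                                  + (1 - self.smoothing) * score)

    def act(self, observation):
        """Hand control to whichever subsystem is currently judged more reliable."""
        chosen = max(self.reliability, key=self.reliability.get)
        return self.controllers[chosen].act(observation)
```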

Another example is the human pain system, which allows humans to protect themselves in potentially hazardous environments.

“Copying the brain’s code for these could greatly enhance the flexibility, efficiency, and safety of robots,” said Prof. Lee.

An interdisciplinary approach

The team argued that this interdisciplinary approach will provide just as many benefits to neuroscience as to robotics. The recent explosion of interest in what lies behind psychiatric disorders such as anxiety, depression, and addiction has given rise to a set of sophisticated theories that are complex and difficult to test without some sort of advanced simulation platform.

Overview of neuroscience-robotics approach for decision-making. The figure details key areas for interdisciplinary study (Current Opinion in Behavioral Sciences)

“We need a way of modeling the human brain to find how it interacts with the world in real-life to test whether and how different abnormalities in these models give rise to certain disorders,” explained Prof. Seymour. “For instance, if we could reproduce anxiety behavior or obsessive-compulsive disorder in a robot, we could then predict what we need to do to treat it in humans.”

The team expects that producing robot models of different psychiatric disorders, in a similar way to how researchers use animal models now, will become a key future technology in clinical research.

Sympathy for the robot

The team also stated that there may also be other benefits to humans and intelligent robots learning, acting, and behaving in the same way. In future societies in which humans and robots live and work amongst each other, the ability to cooperate and empathize with robots might be much greater if we feel they think like us.

“We might think that having robots with the human traits of being a bit impulsive or overcautious would be a detriment, but these traits are an unavoidable by-product of human-like intelligence,” said Prof. Seymour. “And it turns out that this is helping us to understand human behavior as human.”

The framework for achieving this brain-inspired artificial intelligence was published in two journals, Science Robotics on Jan. 16 and Current Opinion in Behavioral Sciences on Feb. 6, 2019.

Reinforcement learning, YouTube teaching robots new tricks

The sun may be setting on what David Letterman would call “Stupid Robot Tricks,” as intelligent machines are beginning to surpass humans in a wide variety of manual and intellectual pursuits. In March 2016, Google’s DeepMind software program AlphaGo defeated the reigning Go champion, Lee Sedol. Go, a Chinese game that originated more than 3,000…
