Freedom Robotics raises seed funding for robotics dev tools, fleet controls


RMS enables fleet management and troubleshooting. Source: Freedom Robotics

SAN FRANCISCO — Freedom Robotics Inc. today announced that it has closed a $6.6 million seed round. The company provides a cloud-based software development infrastructure for managing fleets of robots.

Freedom Robotics cited a World Economic Forum study predicting that, by 2025, machines will perform more workplace tasks than humans, while the shift still creates a net 58 million new jobs worldwide. The company plans to use its funding to build its team and technology.

Freedom Robotics claimed that robotics startups can get their products to market 10 times faster by using its tools to do the “undifferentiated heavy lifting” rather than devoting employees to developing a full software stack. The company said its platform-agnostic Robotics Management Software (RMS) provides the “building blocks” for prototyping, building, operating, and scaling robot fleets.

Freedom Robotics builds RMS for developers

“We’ve seen that robotics is hard,” observed Dimitri Onistsuk, co-founder of Freedom Robotics. “In sixth grade, I wrote a letter to myself saying that I would go to MIT, drop out, and found a company that would change the world.”

Onistsuk did go to MIT and drop out, and he drew on his shared experience with Hans Lee and Joshua Wilson, now chief technology officer and CEO, respectively, at Freedom Robotics.

“We had been building things together before there was a cloud,” recalled Onistsuk. “Now in robotics, very few people have the ability to build a full stack.”

“We see robotics developers who have wonderful applications, like caring for the elderly; transportation; or dull, dirty, and dangerous work,” he said. “Everyone agrees on the value of this area, but they don’t realize the complexity of day-to-day iteration, which requires many engineers and a lot of infrastructure for support.”

“Robotics is like the Web in 2002, where everyone who wants to make an attempt has to raise $10 million and get expert talent in things like computer vision, mechatronics, systems integration, and ROS,” Onistsuk told The Robot Report. “It costs a lot of money to try even once to get a product to market.”

“We’ve combined layers of distinct software services by bringing modern software-development techniques into robotics, which traditionally had a hardware focus,” he said. “You can use one or many — whatever you have to do to scale.”

‘AWS for robots’

Freedom Robotics said that its cloud-based tools can be installed with just a single line of code, and its real-time visualization tools combine robotics management and analysis capabilities that were previously scattered across systems.

“Developers are always trying to improve their processes and learn new things,” said Onistsuk. “Amazon Web Services allows you to bring up a computer with a single line of code. We spent most of the first six months as a company figuring out how to do that for robots. We even bought the domain name ‘90 seconds to go.’”

“You can drop in one line of code and immediately see real-time telemetry and have a cloud link to a robot from anywhere in the world,” he said. “Normally, when you want to adopt new components and are just trying to build a robot where the components talk to one another, that can take months.”
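Freedom Robotics has not published its actual install command or API in this article, so the sketch below only illustrates the pattern its “one line of code” pitch describes: a small on-robot agent streaming telemetry to a cloud endpoint. The endpoint, token, and payload fields here are hypothetical.

```python
# Hypothetical sketch of an on-robot telemetry agent. The endpoint, token,
# and payload fields are illustrative, not Freedom Robotics' actual API.
import json
import time
import urllib.request

API = "https://api.example-rms.com/v1"   # placeholder cloud endpoint
TOKEN = "YOUR-DEVICE-TOKEN"              # issued when the robot is registered

def post_telemetry(robot_id: str, payload: dict) -> None:
    """Push one telemetry sample to the cloud over HTTPS."""
    req = urllib.request.Request(
        f"{API}/robots/{robot_id}/telemetry",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

while True:
    post_telemetry("robot-001", {"ts": time.time(),
                                 "battery_v": 24.1,   # sample sensor values
                                 "cpu_pct": 37.5})
    time.sleep(1.0)                      # stream one sample per second
```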

“During one on-boarding call, a customer was able to see real-time telemetry from its robots within two minutes,” Onistsuk said. “They had never seen sensor-log and live-streaming data together. They thought the video was stuttering, but then an engineer noticed an error in a robot running production software. The bug had already been pushed out to customers. They never had the tools before to see all data in one place in developer-friendly ways.”

“That is the experience we’re getting when building software alongside the people who build robots,” he said. “With faster feedback loops, companies can iterate 10 times faster and move developers to other projects.”


Freedom Robotics’ RMS combines robotics tools to help developers and robotics managers. Source: Freedom Robotics

The same tools for development, management

Onistsuk said that his and Lee’s experience led them to follow standard software-development practices. “Some truths are real — for your core infrastructure, you shouldn’t have to own computers — our software is cloud-based for that reason,” he said.

“We stand on the shoulders of giants and practice what we preach,” Onistsuk asserted. “Pieces of our underlying infrastructure run on standard clouds, and we follow standard ways of building them.”

He said that Freedom Robotics not only offers standardized development tools, but also uses them to build RMS itself.

“With a little thought, for anything that you want to do with our product, you have access to the API calls across the entire fleet,” said Onistsuk. “We used the same APIs to build the product as you would use to run it.”
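The article does not document those API calls, but the claim is worth making concrete: the same fleet-wide interface that renders the dashboard can drive customer scripts. A minimal sketch, assuming a hypothetical REST endpoint and JSON schema:

```python
# Hypothetical fleet-wide API call; the URL and JSON fields are assumptions,
# not Freedom Robotics' documented interface.
import json
import urllib.request

API = "https://api.example-rms.com/v1"
HEADERS = {"Authorization": "Bearer YOUR-API-TOKEN"}

def get(path: str):
    """Fetch one API resource and decode the JSON response."""
    req = urllib.request.Request(f"{API}{path}", headers=HEADERS)
    return json.load(urllib.request.urlopen(req))

# The same call that populates a dashboard can power an ops script:
for robot in get("/fleet/robots"):
    if robot["status"] == "error":
        print(robot["id"], robot["last_error"])
```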


Resource monitoring with RMS. Source: Freedom Robotics

Investors and interoperability

Initialized Capital led the funding round, with participation from Toyota AI Ventures, Green Cow Venture Capital, Joe Montana’s Liquid 2 Ventures, S28 Capital partner Andrew Miklas, and James Lindenbaum. They joined existing investors Kevin Mahaffey, Justin Kan, Matt Brezina, Arianna Simpson, and Josh Buckley.

“We’ll soon reach a point when there are more robots than cell phones, and we’ll need the ‘Microsoft of robotics’ platform to power such a massive market,” said Garry Tan, managing partner at Initialized Capital, which has backed companies such as Instacart, Coinbase, and Cruise.

“Cloud learning will be a game-changer for robotics, allowing the experience of one robot to be ‘taught’ to the rest on the network. We’ve been looking for startups with the technology and market savvy to realize this cloud robotics future through fleet management, control, and analytics,” said Jim Adler, founding managing director at Toyota AI Ventures. “We were impressed with Freedom Robotics’ customer-first, comprehensive approach to managing and controlling fleets of robots and look forward to supporting the Freedom team as they make cloud robotics a market reality.”

“We found out about Toyota AI Ventures through its Twitter account,” said Onistsuk. “We got some referrals and went and met with them. As the founder of multiple companies, Jim [Adler] understood us in a way that industry-specific VCs couldn’t. He got our experience in robotics, building teams, and data analytics.”

What about competing robotics development platforms? “We realized from Day 1 that we shouldn’t be fighting,” Onistsuk replied. “We’re fully integrated with the cloud offerings of Amazon, Google, and Microsoft, as well as ROS. We have drop-in compatibility.”

“What we’re trying to power with that is allowing developers to build things that differentiate their products and services and win customers,” he added. “This is similar to our cloud-based strategy. We try to be hardware-agnostic. We want RMS to work out of the box with as many tools and pieces of hardware as possible so that people can try things rapidly.”


The Freedom Robotics team has raised seed funding. Source: Freedom Robotics

Hardware gets commoditized

“Hardware is getting commoditized and driving market opportunity,” said Onistsuk. “For instance, desktop compute is only $100 — not just Raspberry Pi, but x86 — you can buy a real computer running a full operating system.”

“Sensors are getting cheaper thanks to phones, and 3D printing will affect actuators. NVIDIA is putting AI into a small, low-power form factor,” he added. “With cheaper components, we’re looking at $5,000 robot arms rather than $500,000 arms, and lots of delivery companies are looking to make a vehicle autonomous and operate at a competitive price point.”

“Companies can use RMS to build their next robots as a service [RaaS], and we’ve worked with everything from the largest entertainment companies to sidewalk delivery startups and multibillion-dollar delivery companies,” Onistsuk said. “Freedom Robotics is about democratizing robotics development and removing barriers to entry so that two guys in a garage can scale out to a business because of demand. The dreams of people with real needs in robotics will cause the next wave of innovation.”

“Software infrastructure is hard to do — we take what many developers consider boring so that they can sell robots into businesses or the home that get better over time,” he said.


‘Inspiring’ feedback

Customer feedback so far has been “overwhelmingly inspiring,” said Onistsuk. “The best moments are getting an e-mail from a customer saying, ‘We’re using your product, and we thought we didn’t want some login or alerting plug-in. We have a demo tomorrow, and it would take four months to build it, but you can do it.'”

“We’ve seen from our interactions that the latest generation of robotics developers has different expectations,” he said. “We’re seeing them ‘skating to where the puck is,’ iterating quickly to build tools and services around our roadmap.”

“The RMS is not just used by developers,” Onistsuk said. “Development, operations, and business teams can find and solve problems in a collaborative way with the visualization tool. We can support teams managing multiple robots with just a tablet, and it integrates with Slack.”

“We can go from high-level data down to CPU utilization,” Lee said. “With one click, you can get a replay of GPS and telemetry data and see every robot with an error. Each section is usually one engineer’s responsibility.”
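Neither RMS’ metrics API nor its Slack plug-in is documented in this article, so the following only sketches the alerting pattern described: read a CPU metric and, past a threshold, post to a Slack incoming webhook. The webhook POST is a standard Slack mechanism; how RMS exposes metrics is an assumption.

```python
# Sketch of a CPU-utilization alert forwarded to Slack. The Slack incoming-
# webhook POST is standard; how RMS exposes metrics is assumed here.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # your webhook

def alert_slack(robot_id: str, cpu_pct: float) -> None:
    """Post a one-line warning message to a Slack channel."""
    msg = {"text": f"warning: {robot_id} CPU at {cpu_pct:.0f}% - check replay"}
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(msg).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

cpu = 93.0                  # stand-in for a metric pulled from the fleet API
if cpu > 90.0:
    alert_slack("robot-001", cpu)
```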

“A lot of times, people develop robots for university research or an application, but how does the robot perform in the field when it’s in a ditch?” said Lee. “We can enable developers to make sure robots perform better and more safely.”

Freedom Robotics’ software is currently used in industries including agriculture, manufacturing, logistics, and restaurants.

“This is similar to getting dev done in minutes, not months, and it could speed up the entire robotics industry,” Onistsuk added. “Investors are just as excited about the team, scaling the business, and new customers as I am.”


TRI tackles manipulation research for reliable, robust human-assist robots

Wouldn’t it be amazing to have a robot in your home that could work with you to put away the groceries, fold the laundry, cook your dinner, do the dishes, and tidy up before the guests come over? For some of us, a robot assistant – a teammate – might only be a convenience.

But for others, including our growing population of older people, applications like this could be the difference between living at home or in an assisted care facility. Done right, we believe these robots will amplify and augment human capabilities, allowing us to enjoy longer, healthier lives.

Decades of prognostications about the future – largely driven by science fiction novels and popular entertainment – have encouraged public expectations that someday home robots will happen. Companies have been trying for years to deliver on such forecasts and figure out how to safely introduce ever more capable robots into the unstructured home environment.

Despite this age of tremendous technological progress, the robots we see in homes to date are primarily vacuum cleaners and toys. Most people don’t realize how far today’s best robots are from being able to do basic household tasks. When they see heavy use of robot arms in factories or impressive videos on YouTube showing what a robot can do, they might reasonably expect these robots could be used in the home now.

Bringing robots into the home

Why haven’t home robots materialized as quickly as some have come to expect? One big challenge is reliability. Consider:

  • If you had a robot that could load dishes into the dishwasher for you, what if it broke a dish once a week?
  • Or, what if your child brings home a “No. 1 DAD!” mug that she painted at the local art studio, and after dinner, the robot discards that mug into the trash because it didn’t recognize it as an actual mug?

A major barrier to bringing robots into the home is a set of core unsolved problems in manipulation that prevent reliability. As I presented this week at the Robotics: Science and Systems conference, the Toyota Research Institute (TRI) is working on fundamental issues in robot manipulation to tackle these unsolved reliability challenges. We have been pursuing a unique combination of robotics capabilities focused on dexterous tasks in an unstructured environment.

Unlike the sterile, controlled and programmable environment of the factory, the home is a “wild west” – unstructured and diverse. We cannot expect lab tests to account for every different object that a robot will see in your home. This challenge is sometimes referred to as “open-world manipulation,” as a callout to “open-world” computer games.

Despite recent strides in artificial intelligence and machine learning, it is still very hard to engineer a system that can deal with the complexity of a home environment and guarantee that it will (almost) always work correctly.

TRI addresses the reliability gap

The demonstration video accompanying this post shows how TRI is exploring the challenge of robustness to address the reliability gap. We are using a robot loading dishes in a dishwasher as an example task. Our goal is not to design a robot that loads the dishwasher; rather, we use this task as a means to develop the tools and algorithms that can in turn be applied in many different applications.

Our focus is not on hardware, which is why we are using a factory robot arm in this demonstration rather than designing one that would be more appropriate for the home kitchen.

The robot in our demonstration uses stereo cameras mounted around the sink and deep learning algorithms to perceive objects in the sink. There are many robots out there today that can pick up almost any object — random object clutter clearing has become a standard benchmark robotics challenge. In clutter clearing, the robot doesn’t require much understanding about an object — perceiving the basic geometry is enough.

For example, the algorithm doesn’t need to recognize whether the object is a plush toy, a toothbrush, or a coffee mug. Given this, these systems are also relatively limited in what they can do with those objects; for the most part, they can only pick up the objects and drop them in another location. In the robotics world, we sometimes refer to these robots as “pick and drop.”

Loading the dishwasher is actually significantly harder than what most roboticists are currently demonstrating, and it requires considerably more understanding about the objects. Not only does the robot have to recognize a mug or a plate or “clutter,” but it has to also understand the shape, position, and orientation of each object in order to place it accurately in the dishwasher.
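To make that distinction concrete, here is an illustrative sketch (not TRI’s code) contrasting class-agnostic “pick and drop” with the pose-aware placement that dishwasher loading requires:

```python
# Illustrative contrast between "pick and drop" and pose-aware placement.
# The data structure and rack mapping are hypothetical, not TRI's code.
from dataclasses import dataclass

@dataclass
class Detection:
    category: str                       # "mug", "plate", "silverware", ...
    position: tuple                     # (x, y, z) in the sink frame
    orientation: tuple                  # quaternion (x, y, z, w)

def pick_and_drop(det: Detection) -> str:
    # Clutter clearing: basic geometry is enough, category is ignored.
    return f"grasp at {det.position}, drop in bin"

def place_in_dishwasher(det: Detection) -> str:
    # Loading needs the category AND the full pose to choose a rack slot.
    rack = {"plate": "bottom", "mug": "middle", "silverware": "top"}.get(det.category)
    if rack is None:
        return "label as clutter, move to discard bin"
    return (f"grasp at {det.position}, align to {det.orientation}, "
            f"place in {rack} rack")
```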

TRI’s work in progress shows not only that this is possible, but that it can be done with robustness that allows the robot to continuously operate for hours without disruption.


Getting a grasp on household tasks

Our manipulation robot has a relatively simple hand — a two-fingered gripper. The hand can make relatively simple grasps on a mug, but its ability to pick up a plate is more subtle. Plates are large and may be stacked, so we have to execute a complex “contact-rich” maneuver that slides one gripper finger under and between plates in order to get a firm hold. This is a simple example of the type of dexterity that humans achieve easily, but that we rarely see in robust robotics applications.

Silverware can also be tricky — it is small and shiny, which makes it hard to perceive with a machine-learning vision system. Plus, given that the robot hand is relatively large compared to the smaller sink, the robot occasionally needs to stop and nudge the silverware to the center of the sink in order to do the pick. Our system can also detect if an object is not a mug, plate, or silverware, label it as “clutter,” and move it to a “discard” bin.

Connecting all of these pieces is a sophisticated task planner, which is constantly deciding what task the robot should execute next. This task planner decides whether it should pull out the bottom drawer of the dishwasher to load some plates, pull out the middle drawer for mugs, or pull out the top drawer for silverware.

Like the other components, we have made it resilient — if the drawer gets suddenly closed when it needs to be open, the robot will stop, put down the object on the countertop, and pull the drawer back out to try again. This response shows how different this capability is from that of a typical precision, repetitive factory robot, which is typically isolated from human contact and environmental randomness.
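A minimal sketch of that recovery logic, with stub functions standing in for the real perception and manipulation skills:

```python
# Minimal sketch of the recovery behavior described above; the primitives
# below are stubs, not TRI's actual skills.
def open_drawer(d):           print(f"open {d} drawer")
def drawer_is_open(d):        return True          # stub sensor check
def put_down_on_counter(o):   print(f"put {o} on counter")
def place_in_drawer(o, d):    print(f"place {o} in {d} drawer")
def move_to_discard_bin(o):   print(f"move {o} to discard bin")

DRAWER_FOR = {"plate": "bottom", "mug": "middle", "silverware": "top"}

def load(obj: str) -> None:
    drawer = DRAWER_FOR.get(obj)
    if drawer is None:                  # not a mug, plate, or silverware
        move_to_discard_bin(obj)
        return
    open_drawer(drawer)
    if not drawer_is_open(drawer):      # someone closed it mid-task
        put_down_on_counter(obj)        # recover instead of failing
        open_drawer(drawer)
    place_in_drawer(obj, drawer)

for obj in ["plate", "mug", "toy"]:
    load(obj)
```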


Simulation key to success

The cornerstone of TRI’s approach is the use of simulation. Simulation gives us a principled way to engineer and test systems of this complexity with incredible task diversity and machine learning and artificial intelligence components. It allows us to understand what level of performance the robot will have in your home with your mugs, even though we haven’t been able to test in your kitchen during our development.

An exciting achievement is that we have made great strides in making simulation robust enough to handle the visual and mechanical complexity of this dishwasher-loading task and in closing the “sim-to-real” gap. We are now able to design and test in simulation and have confidence that the results will transfer to the real robot. At long last, we have reached a point where we do nearly all of our development in simulation, which has traditionally not been the case for robotic manipulation research.

We can run many more tests in simulation, and more diverse ones. We are constantly generating random scenarios that test the individual components of the dish loading as well as end-to-end performance.

Let me give you a simple example of how this works. Consider the task of extracting a single mug from the sink. We generate scenarios where we place the mug in all sorts of random configurations, testing to find “corner cases” — rare situations where our perception algorithms or grasping algorithms might fail. We can vary material properties and lighting conditions. We even have algorithms for generating random, but reasonable, shapes of the mug, generating everything from a small espresso cup to a portly cylindrical coffee mug.
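In code, that kind of scenario randomization can be as simple as the sketch below; the parameter ranges are invented for illustration:

```python
# Sketch of randomized scenario generation for corner-case hunting.
# Parameter ranges are invented; seeds make every failure replayable.
import random

def random_mug_scenario(seed: int) -> dict:
    rng = random.Random(seed)
    return {
        "radius_cm": rng.uniform(2.5, 6.0),     # espresso cup .. portly mug
        "height_cm": rng.uniform(5.0, 12.0),
        "pose_xy":   (rng.uniform(-0.2, 0.2), rng.uniform(-0.15, 0.15)),
        "yaw_rad":   rng.uniform(-3.14, 3.14),
        "friction":  rng.uniform(0.2, 1.0),     # material property
        "lux":       rng.uniform(50, 2000),     # lighting condition
    }

failures = []
for seed in range(10_000):                      # run through the night
    scenario = random_mug_scenario(seed)
    ok = True   # placeholder for run_in_simulation(scenario)
    if not ok:
        failures.append(seed)                   # the morning failure report
```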

We conduct simulation testing through the night, and every morning we receive a report that gives us new failure cases that we need to address.

Early on, those failures were relatively easy to find, and easy to fix. Sometimes they are failures of the simulator — something happened in the simulator that could never have happened in the real world — and sometimes they are problems in our perception or grasping algorithms. We have to fix all of these failures.


TRI is using an industrial robot for household tasks to test its algorithms. Source: TRI

As we continue down this road to robustness, the failures are getting more rare and more subtle. The algorithms that we use to find those failures also need to get more advanced. The search space is so huge, and the performance of the system so nuanced, that finding the corner cases efficiently becomes our core research challenge.

Although we are exploring this problem in the kitchen sink, the core ideas and algorithms are motivated by, and are applicable to, related problems such as verifying automated driving technologies.

‘Repairing’ algorithms

The next piece of our work focuses on the development of algorithms to automatically “repair” the perception algorithm or controller whenever we find a new failure case. Because we are using simulation, we can test our changes not only against the newly discovered scenario, but also against all of the other scenarios discovered in preceding tests.

Of course, it’s not enough to fix this one test; we have to make sure we do not break any of the other tests that passed before. It’s possible to imagine a not-so-distant future where this repair can happen directly in your kitchen, whereby if one robot fails to handle your mug correctly, then all robots around the world learn from that mistake.
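The repair-then-regress loop can be summarized in a short sketch; every name here is an illustrative placeholder rather than TRI’s implementation:

```python
# Sketch of the repair loop: a fix must pass the new failure case AND the
# accumulated regression suite before it is accepted. Names are illustrative.
def repair_and_verify(model, new_failure, regression_suite, run_test):
    patched = model.patch_for(new_failure)      # hypothetical repair step
    if not run_test(patched, new_failure):
        raise RuntimeError("repair did not fix the new failure case")
    for case in regression_suite:               # don't break what passed before
        if not run_test(patched, case):
            raise RuntimeError(f"repair regressed on {case}")
    regression_suite.append(new_failure)        # the suite only ever grows
    return patched
```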

We are committed to achieving dexterity and reliability in open-world manipulation. Loading a dishwasher is just one example in a series of experiments we will be using at TRI to focus on this problem.

It’s a long journey, but ultimately it will produce capabilities that will bring more advanced robots into the home. When this happens, we hope that older adults will have the help they need to age in place with dignity, working with a robotic helper that will amplify their capabilities while allowing greater independence for longer.

Editor’s note: This post by Dr. Russ Tedrake, vice president of robotics research at TRI and a professor at the Massachusetts Institute of Technology, is republished with permission from the Toyota Research Institute.

Brain Corp Europe opens in Amsterdam


A BrainOS-powered autonomous floor scrubber. | Credit: Brain Corp

San Diego-based Brain Corp, the SoftBank-backed developer of autonomous navigation systems, has opened its European headquarters in Amsterdam. The reason for the expansion is two-fold: it helps Brain better support partners who do business in Europe, and it helps Brain find additional engineering talent.

“Amsterdam is a fantastic gateway to Europe and has one of the largest airports in Europe,” Sandy Agnos, Brain’s Director of Global Business Development, told The Robot Report. “It’s very business and tech friendly. It is the second-fastest-growing tech community, talent-wise, in Europe.”

Brain hired Michel Spruijt to lead Brain Corp Europe. He will be tasked with driving sales of BrainOS-powered machines, providing partner support, and overseeing general operations throughout Europe. Agnos said Spruijt’s previous experience growing an office from a few employees to over 100 “was impressive to us.”

“Under Michel Spruijt’s guidance, our vision of a world where the lives of people are made safer, easier, more productive, and more fulfilling with the help of robots will extend into Europe,” said Eugene Izhikevich, Brain Corp’s Co-Founder and CEO.

Agnos said there will initially be about 12 employees at Brain Corp Europe who focus mostly on service and support. She added that Brain is recruiting software engineering talent and will continue to grow the Amsterdam office.

A rendering of how BrainOS-powered machines sense their environment. | Credit: Brain Corp

Brain planning worldwide expansion

The European headquarters marks the second international office in Brain’s global expansion. The company opened an office in Tokyo in 2017. This made sense for a couple of reasons: Japanese tech giant SoftBank led Brain’s $114 million funding round in mid-2017 via the SoftBank Vision Fund, and SoftBank’s new autonomous floor-cleaning robot, Whiz, uses Brain’s autonomous navigation stack.

Agnos said Brain is planning to add other regional offices after Amsterdam. The dates are in flux, but future expansion includes:

  • Further growth in Europe in 2020
  • Expansion in Asia Pacific, specifically Australia and Korea, in mid- to late-2020
  • South America afterwards

“We follow our partners’ needs,” said Agnos. “We are becoming a global company with support offices around the world. The hardest part is we can’t expand fast enough. Our OEM partners already have large, global customer bases. We need to have the right people and infrastructure in each location.”

BrainOS-powered robots

BrainOS, the company’s cloud-connected operating system, currently powers thousands of floor care robots across numerous environments. Brain recently partnered with Nilfisk, a Copenhagen, Denmark-based cleaning solutions provider that has been around for 110-plus years. Nilfisk is licensing the BrainOS platform for the production, deployment, and support of its robotic floor cleaners.

Walmart, the world’s largest retailer, has 360 BrainOS-powered machines cleaning its stores across the United States. A human needs to initially teach the BrainOS-powered machines the layout of the stores, but after that initial training run, BrainOS’ combination of off-the-shelf hardware, sensors, and software enables the floor scrubbers to navigate autonomously. Brain employs a collection of cameras, sensors, and lidar to ensure safety and obstacle avoidance. All the robots are connected to a cloud-based reporting system that allows them to be monitored and managed.
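That workflow is a classic teach-and-repeat pattern: record a route once under human control, then replay it with sensors handling safety. A generic sketch of the pattern (not BrainOS code):

```python
# Generic teach-and-repeat sketch; this illustrates the pattern described
# above and is not BrainOS code.
import json

def teach(route_name: str, odometry_stream) -> None:
    """Record poses while a human drives the machine through the store once."""
    with open(f"{route_name}.json", "w") as f:
        json.dump(list(odometry_stream), f)

def repeat(route_name: str, drive_to, obstacle_ahead) -> None:
    """Replay the taught route, deferring to sensors for obstacle avoidance."""
    with open(f"{route_name}.json") as f:
        for pose in json.load(f):
            while obstacle_ahead():     # lidar/camera safety check
                pass                    # wait (or re-plan) until clear
            drive_to(pose)
```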

At ProMat 2019, Brain debuted AutoDelivery, a proof-of-concept autonomous delivery robot designed for retail stores, warehouses, and factories. AutoDelivery, which can tow several cart types, boasts cameras, 4G LTE connectivity, and routing algorithms that allow it to learn its way around a store. AutoDelivery isn’t slated for commercial launch until early 2020.

Izhikevich recently told The Robot Report that Brain is exploring other types of mobile applications, including delivery, eldercare, security and more. In July 2018, Brain led a $13.4 million Series B for Savioke, which makes autonomous delivery robots. For years, Savioke built its autonomous navigation stack from scratch using ROS.

Wenco, Hitachi Construction Machinery announce open ecosystem for autonomous mining


Autonomous mining haulage in Australia. Source: Wenco

TOKYO — Hitachi Construction Machinery Co. last week announced its vision for autonomous mining — an open, interoperable ecosystem of partners that integrate their systems alongside existing mine infrastructure.

Grounded in support for ISO standards and a drive to encourage new entrants into the mining industry, Hitachi Construction Machinery (HCM) said it is pioneering this approach to autonomy among global mining technology leaders. HCM has now publicly declared support for standards-based autonomy and is offering its technology to assist mining customers in integrating new vendors into their existing infrastructure. HCM’s support for open, interoperable autonomy is based on its philosophy for its partner-focused Solution Linkage platform.

“Open innovation is the guiding technological philosophy for Solution Linkage,” said Hideshi Fukumoto, vice president, executive officer, and chief technology officer at HCM. “Based on this philosophy, HCM is announcing its commitment to championing the customer enablement of autonomous mining through an open, interoperable ecosystem of partner solutions.”

“We believe this open approach provides customers the greatest flexibility and control for integrating new autonomous solutions into their existing operations while reducing the associated risks and costs of alternative approaches,” he said.

The HCM Group is developing this open-autonomy approach under the Solution Linkage initiative, a platform already available to HCM’s construction customers that is now being made available to mining customers with support from HCM subsidiary Wenco International Mining Systems (Wenco).

Three development principles for Wenco, Hitachi

Solution Linkage is a standards-based platform grounded on three principles: open innovation, interoperability, and a partner ecosystem.

In this context, “open innovation” means the HCM Group’s support for open standards to enable the creation of multi-vendor solutions that reduce costs and increase value for customers.

By designing solutions in compliance with ANSI/ISA-95 and ISO standards for autonomous interoperability, Solution Linkage avoids vendor lock-in and offers customers the freedom to choose technologies from preferred vendors independent of their fleet management system, HCM said. This approach future-proofs customer technology infrastructure, providing a phased approach for their incorporation of new technologies as they emerge, claimed the company.

This approach also benefits autonomy vendors who are new to mining, since they will be able to leverage HCM’s technology and experience in meeting the requirements of mining customers.

The HCM Group’s key capability of interoperability creates simplified connectivity between systems to reduce operational silos, enabling end-to-end visibility and control across the mining value chain. HCM said that customers can use Solution Linkage to connect autonomous equipment from multiple vendors into existing fleet management and operations infrastructure.

The interoperability principle could also provide mines a systems-level understanding of their pit-to-port operation, providing access to more robust data analytics and process management. This capability would enable mine managers to make superior decisions based on operation-wide insights that deliver end-to-end optimization, said HCM.


Mining customers think about productivity and profitability throughout their entire operation, from geology to transportation — from pit to port. Source: Wenco

HCM said its partner ecosystem will allow customers and third-party partners to use its experience and open platform to successfully provide autonomous functionality and reduce the risk of technological adoption. This initiative is already working with a global mining leader to integrate non-mining OEM autonomous vehicles into its existing mining infrastructure.

Likewise, HCM is actively seeking customer and vendor partnerships to further extend the value of this open, interoperable platform. If autonomy vendors have already been selected by a customer and are struggling to integrate into the client’s existing fleet management system or mine operations, Hitachi may be able to help using the Solution Linkage platform.

The HCM Group will reveal further details of its approach to open autonomy and Solution Linkage in a presentation at the CIM 2019 Convention, running April 28 to May 1 at the Palais des congrès in Montreal, Canada. Fukumoto and other senior executives from Hitachi and Wenco will discuss this strategy and details of Hitachi’s plans for mining in several presentations throughout the event. The schedule of Hitachi-related events is as follows:

  • Sunday, April 28, 4:30 PM — A welcome speech at the event’s Opening Ceremonies by Wenco Board Member and HCM Executive Officer David Harvey;
  • Monday, April 29, 10:00 AM — An Innovation Stage presentation on the Solution Linkage vision for open autonomy by Wenco Board Member and HCM Vice President and Executive Officer, CTO Hideshi Fukumoto;
  • Monday, April 29, 12:00 PM — Case Study: Accelerating Business Decisions and Mine Performance Through Operational Data Analysis at an Australian Coal Operation technical breakout presentation by Wenco Executive Vice-President of Corporate Strategy Eric Winsborrow;
  • Monday, April 29, 2:00 PM — Toward an Open Standard in Autonomous Control System Interfaces: Current Issues and Best Practices technical breakout presentation by Wenco Director of Technology Martin Politick;
  • Tuesday, April 30, 10:00 AM — An Innovation Stage presentation on Hitachi’s vision for data and IoT in mining by Wenco Executive Vice-President of Corporate Strategy Eric Winsborrow;
  • Wednesday, May 1, 4:00 PM — A concluding speech at the event’s closing luncheon by Wenco Board Member and HCM General Manager of Solution Business Center Yoshinori Furuno.

These presentations further detail the ongoing work of HCM and support the core message about open, interoperable, partner ecosystems.

To learn more about the HCM announcement in support of open and interoperable mining autonomy, Solution Linkage, or other HCM solutions, please contact Hitachi Construction Machinery.

Giving robots a better feel for object manipulation


A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch – and it may have fun applications in personal robotics, such as modelling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models, to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials – “particles” – interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration – such as a “T” shape – that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

A new “particle simulator” developed by MIT improves robots’ abilities to mold materials into simulated target shapes and interact with solid objects and liquids. This could give robots a refined touch for industrial applications or for personal robotics. | Credit: MIT

Dynamic graphs

A key innovation behind the model, called “dynamic particle interaction networks” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected with each other using directed edges, which represent the interaction passing from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model over time learns how particles in different materials react and reshape. It does so by implicitly calculating various properties for each particle — such as its mass and elasticity — to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material – rigid, deformable, and liquid – to shoot a signal that predicts particle positions at certain incremental time steps. At each step, it moves and reconnects particles, if needed.

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object moves with the same calculated translation and rotation. Particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect will be different. Perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.
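A toy, hand-coded stand-in for one propagation step can make the idea concrete. In DPI-Nets the message functions are learned networks; here a fixed linear rule substitutes for them:

```python
# Toy message-passing step over a particle graph. In DPI-Nets the message
# functions are learned; this fixed linear rule is only a stand-in.
def propagate(positions, edges, stiffness=0.5):
    """One step: each particle accumulates effects from its neighbors."""
    effects = [[0.0, 0.0, 0.0] for _ in positions]
    for i, j in edges:                  # directed edge j -> i
        for k in range(3):
            # message proportional to the neighbor's relative displacement
            effects[i][k] += stiffness * (positions[j][k] - positions[i][k])
    return [tuple(p[k] + effects[i][k] for k in range(3))
            for i, p in enumerate(positions)]

# Two rigidly linked particles: perturb one, and the other is pulled along.
print(propagate([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)], [(0, 1), (1, 0)]))
```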

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
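That feedback loop can be sketched as below; the model’s methods are hypothetical placeholders for the learned simulator’s interface:

```python
# Sketch of the shaping loop: act, compare predicted vs. observed particles,
# and correct the model when they disagree. All method names are placeholders.
def shape_to_target(model, observe, act, target, steps=100, tol=1e-3):
    for _ in range(steps):
        act(model.best_action(target))      # hypothetical: choose next press
        observed = observe()                # real particle positions (camera)
        if model.prediction_error(observed) > tol:
            model.update(observed)          # tweak toward real-world physics
        if model.distance_to(target) < tol:
            break                           # foam matches the target shape
```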

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

Editor’s Note: This article was republished with permission from MIT News.

Robotics investments recap: March 2019


CloudMinds was among the robotics companies receiving funding in March 2019. Source: CloudMinds

Investments in robots, autonomous vehicles, and related systems totaled at least $1.3 billion in March 2019, down from $4.3 billion in February. On the other hand, automation companies reported $7.8 billion in mergers and acquisitions last month. While that may represent a slowdown, note that many businesses did not specify the amounts involved in their transactions, of which there were at least 58 in March.

Self-driving cars and trucks — including machine learning and sensor technologies — continued to receive significant funding. Although Lyft’s initial public offering was not directly related to autonomous vehicles, it illustrates the investment flowing into transportation.

Other use cases represented in March 2019 included surgical robotics, industrial automation, and service robots. See the table below, which lists amounts in millions of dollars where they were available:

Company | Amt. (M$) | Type | Lead investor, partner, acquirer | Date | Technology
Airbiquity | 15 | investment | Denso Corp., Toyota Motor Corp., Toyota Tsusho Corp. | March 12, 2019 | connected vehicles
AROMA BIT Inc. | 2.2 | Series A | Sony Innovation Fund | March 3, 2019 | olfactory sensors
AtomRobot | n/a | Series B1 | Y&R Capital | March 5, 2019 | industrial automation
Automata | 7.4 | Series A | ABB | March 19, 2019 | robot arm
Avidbots | 23.6 | Series B | True Ventures | March 21, 2019 | commercial floor cleaning
Boranet | n/a | Series A | Gobi Partners | March 6, 2019 | IIoT, machine vision
Brodmann17 | 11 | Series A | OurCrowd | March 6, 2019 | deep learning, autonomous vehicles
CloudMinds | 300 | investment | SoftBank Vision Fund | March 26, 2019 | service robots
Corindus | 4.8 | private placement | n/a | March 12, 2019 | surgical robot
Determined AI | 11 | Series A | GV (Google Ventures) | March 13, 2019 | AI, deep learning
Emergen Group | 29 | Series B | Qiming Venture Partners | March 13, 2019 | industrial automation
Fabu Technology | n/a | pre-Series A | Qingsong Fund | March 1, 2019 | autonomous vehicles
Fortna | n/a | recapitalization | Thomas H. Lee Partners LP | March 27, 2019 | materials handling
ForwardX | 14.95 | Series B | Hupang Licheng Fund | March 21, 2019 | autonomous mobile robots
Gaussian Robotics | 14.9 | Series B | Grand Flight Investment | March 20, 2019 | cleaning
Hangzhou Guochen Robot Technology | 15 | Series A | Hongcheng Capital, Yingshi Fund (YS Investment) | March 13, 2019 | robotics R&D
Hangzhou Jimu Technology Co. | n/a | Series B | Flyfot Ventures | March 6, 2019 | autonomous vehicles
InnerSpace | 3.2 | seed | BDC Capital's Women in Technology Fund | March 26, 2019 | IoT
Innoviz Technologies | 132 | Series C | China Merchants Capital, Shenzhen Capital Group, New Alliance Capital | March 26, 2019 | lidar
Intelligent Marking | n/a | investment | Benjamin Capital | March 6, 2019 | autonomous robots for marking sports fields
Kaarta Inc. | 6.5 | Series A | GreenSoil Building Innovation Fund | March 21, 2019 | lidar mapping
Kolmostar Inc. | 10 | Series A | n/a | March 5, 2019 | positioning technology
Linear Labs | 4.5 | seed | Science Inc., Kindred Ventures | March 26, 2019 | motors
MELCO Factory Automation Philippines Inc. | 2.38 | new division | Mitsubishi Electric Corp. | March 12, 2019 | industrial automation
Monet Technologies | 4.51 | joint venture | Honda Motor Co., Hino Motors Ltd., SoftBank Corp., Toyota Motor Corp. | March 28, 2019 | self-driving cars
Ouster | 60 | investment | Runway Growth Capital, Silicon Valley Bank | March 25, 2019 | lidar
Pickle Robot Co. | 3.5 | equity sale | n/a | March 4, 2019 | loading robot
Preteckt | 2 | seed | Las Olas Venture Capital | March 26, 2019 | machine learning, automotive
Radar | 16 | investment | Sound Ventures, NTT Docomo Ventures, Align Ventures, Beanstalk Ventures, Colle Capital, Founders Fund Pathfinder, Novel TMT | March 28, 2019 | RFID inventory management
Revvo (IntelliTire) | 4 | Series A | Norwest Venture Partners | March 26, 2019 | smart tires
Shanghai Changren Information Technology | 14.89 | Series A | n/a | March 15, 2019 | Xiaobao healthcare robot
TakeOff Technologies Inc. | n/a | equity sale | n/a | March 26, 2019 | grocery robots
TartanSense | 2 | seed | Omnivore, Blume Ventures, BEENEXT | March 11, 2019 | weeding robot
Teraki | 2.3 | investment | Horizon Ventures, American Family Ventures | March 27, 2019 | AI, automotive electronics
Think Surgical | 134 | investment | n/a | March 11, 2019 | surgical robot
Titan Medical | 25 | IPO | n/a | March 22, 2019 | surgical robotics
TMiRob | n/a | Series B+ | Shanghai Zhangjiang Torch Venture Capital | March 26, 2019 | hospital robot
TOYO Automation Co. | n/a | investment | Yamaha Motor Co. | March 20, 2019 | actuators
Ubtech | n/a | investment | Liangjiang Capital | March 6, 2019 | humanoid
Vintra | 4.8 | investment | Bonfire Ventures, Vertex Ventures, London Venture Partners | March 11, 2019 | machine vision
Vtrus | 2.9 | investment | n/a | March 8, 2019 | drone inspection
Weltmeister Motor | 450 | Series C | Baidu Inc. | March 11, 2019 | self-driving cars

And here are the mergers and acquisitions:

March 2019 robotics acquisitions

Company | Amt. (M$) | Acquirer | Date | Technology
Accelerated Dynamics | n/a | Animal Dynamics | 3/8/2019 | AI, drone swarms
Astori AS | n/a | 4Subsea | 3/19/2019 | undersea control systems
Brainlab | n/a | Smith & Nephew | 3/12/2019 | surgical robot
Figure Eight | 175 | Appen Ltd. | 3/10/2019 | AI, machine learning
Floating Point FX | n/a | CycloMedia | 3/7/2019 | machine vision, 3D modeling
Florida Turbine Technologies | 60 | Kratos Defense and Security Solutions | 3/1/2019 | drones
Infinity Augmented Reality | n/a | Alibaba Group Holding Ltd. | 3/21/2019 | AR, machine vision
Integrated Device Technology Inc. | 6,700 | Renesas | 3/30/2019 | self-driving vehicle processors
Medineering | n/a | Brainlab | 3/20/2019 | surgical
Modern Robotics Inc. | 0.97 | Boxlight Corp. | 3/14/2019 | STEM
OMNI Orthopaedics Inc. | n/a | Corin Group | 3/6/2019 | surgical robotics
OrthoSpace Ltd. | 220 | Stryker Corp. | 3/14/2019 | surgical robotics
Osiris Therapeutics | 660 | Smith & Nephew | 3/12/2019 | surgical robotics
Restoration Robotics Inc. | 21 | Venus Concept Ltd. | 3/15/2019 | surgical robotics
Sofar Ocean Technologies | 7 | Spoondrift, OpenROV | 3/28/2019 | underwater drones, sensors
Torc Robotics Inc. | n/a | Daimler Trucks and Buses Holding Inc. | 3/29/2019 | driverless truck software

Surgical robots make the cut

One of the largest transactions reported in March 2019 was Smith & Nephew’s purchase of Osiris Therapeutics for $660 million. However, some Osiris shareholders are suing to block the acquisition because they believe the price that U.K.-based Smith & Nephew is offering is too low. The shareholders’ confidence reflects a hot healthcare robotics space, where capital, consolidation, and chasing new applications are driving factors.

In the meantime, Stryker Corp. bought sports medicine provider OrthoSpace Ltd. for $220 million. The market for sports medicine will experience a compound annual growth rate of 8.9% between now and 2023, predicts Market Research Future.

Fremont, Calif.-based Think Surgical raised $134 million for its robot-assisted orthopedic surgical device, and Titan Medical closed a $25 million public offering last month.

Venus Concept Ltd. merged with hair-implant provider Restoration Robotics for $21 million, and Shanghai Changren Information Technology raised Series A funding of $14.89 million for its Xiaobao healthcare robot.

Corindus Vascular Robotics Inc. added $5 million to the $15 million it had raised the month before. Brainlab acquired Medineering and was itself acquired by Smith & Nephew.

Driving toward automation in March 2019

Aside from Lyft, the biggest reported transportation robotics transaction in March 2019 was Renesas’ completion of its $6.7 billion purchase of Integrated Device Technology Inc. for its self-driving car chips.

The next biggest deal was Weltmeister Motor’s $450 million Series C, in which Baidu Inc. participated.

Lidar also got some support, with Innoviz Technologies raising $132 million in a Series C round, and Ouster raising $60 million. In a prime example of how driverless technology is “paying a peace dividend” to other applications, Google parent Alphabet’s Waymo unit offered its custom lidar sensors to robotics, security, and agricultural companies.

Automakers recognize the need for 3-D modeling, sensors, and software for autonomous vehicles to navigate safely and accurately. A Daimler unit acquired Torc Robotics Inc., which is working on driverless trucks, and CycloMedia acquired machine vision firm Floating Point FX. The amounts were not specified.

Speaking of machine learning, Appen Ltd. acquired dataset annotation company Figure Eight for $175 million, with a possible $125 million more based on 2019 performance. Denso Corp. and Toyota Motor Corp. contributed $15 million to Airbiquity, which is working on connected vehicles.

Service robots clean up

From retail to cleaning and customer service, the combination of improving human-machine interactions, ongoing staffing turnover and shortages, and companies with round-the-clock operations has contributed to investor interest.

The SoftBank Vision Fund participated in a $300 million round for CloudMinds. The Chinese AI and robotics company’s XR-1 is a humanoid service robot, and it also makes security robots and connects robots to the cloud.

According to its filing with the U.S. Securities and Exchange Commission, TakeOff Technologies Inc. raised an unspecified amount for its grocery robots, an area that many observers expect to grow as consumers become more accustomed to getting home deliveries.

On the cleaning side, Avidbots raised $23.6 million in Series B, led by True Ventures. Gaussian Robotics’ Series B was $14.9 million, with participation from Grand Flight Investment.

Robotics Summit & Expo 2019 logoKeynotes | Speakers | Exhibitors | Register

Wrapping up Q1 2019

China’s efforts to develop its domestic robotics industry continued, as Emergen Group’s $29 million Series B round was the largest reported investment in industrial automation last month.

Hangzhou Guochen Robot Technology raised $15 million in Series A funding for robotics research and development and integration.

That was followed by ABB’s participation in Series A funding of $7.4 million for Automata, which makes a small collaborative robot arm named Eva. Mitsubishi Electric Corp. said it’s spending $2.38 million to set up a new company, MELCO Factory Automation Philippines Inc., because it expects to grow its business there to $30 million by 2026.

Data startup Spoondrift and underwater drone maker OpenROV merged to form Sofar Ocean Technologies. The new San Francisco company also announced a Series A round of $7 million. Also, 4Subsea acquired underwater control systems maker Astori AS.

In the aerial drone space, Kratos Defense and Security Solutions acquired Florida Turbine Technologies for $60 million, and Vtrus raised $2.9 million for commercializing drone inspections. Kaarta Inc., which makes lidar-based indoor mapping systems, raised $6.5 million.

The Robot Report broke the news of Aria Insights, formerly known as CyPhy Works, shutting down in March 2019.


Editor’s note: What defines robotics investments? The answer to this simple question is central in any attempt to quantify robotics investments with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and Investing
Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and Intelligent Systems Companies
Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, think, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification
Funding information is collected from a number of public and private sources. These include press releases from corporations and investment groups, corporate briefings, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded.


Reinforcement learning, YouTube teaching robots new tricks

The sun may be setting on what David Letterman would call “Stupid Robot Tricks,” as intelligent machines are beginning to surpass humans in a wide variety of manual and intellectual pursuits. In March 2016, Google’s DeepMind software program AlphaGo defeated the reigning Go champion, Lee Sedol. Go, a Chinese game that originated more than 3,000…


OptoForce releases new software for Universal Robots

OptoForce, a provider of multi-axis force and torque sensors, has completely rewritten its core software. This enables new capabilities and automation tasks not previously available for Universal Robots industrial robots. The new developments also allow faster integration of many industrial robotic functions. For example, the speed of pin insertion and path recording with…
