U.S. Robotics Roadmap calls for white papers for revision

The U.S. National Robotics Roadmap was first created 10 years ago. Since then, government agencies, universities, and companies have used it as a reference for where robotics is going. The first roadmap was published in 2009 and then revised in 2013 and 2016. The objective is to publish the fourth version of the roadmap by summer 2020.

The team developing the U.S. National Robotics Roadmap has put out a call to engage about 150 to 200 people from academia and industry to ensure that it is representative of the robotics community’s view of the future. The roadmap will cover manufacturing, service, medical, first-responder, and space robotics.

The revised roadmap will also include considerations related to ethics and workforce. It will cover emerging applications, the key challenges to progress, and what research and development is needed.

Join community workshops

Three one-and-a-half-day workshops will be organized for community input to the roadmap. The workshops will take place as follows:

  • Sept. 11-12 in Chicago (organized by Nancy Amato, co-director of the Parasol Lab at Texas A&M University and head of the Department of Computer Science at the University of Illinois at Urbana-Champaign)
  • Oct. 17-18 in Los Angeles (organized by Maja Mataric, Chan Soon-Shiong distinguished professor of computer science, neuroscience, and pediatrics at the University of Southern California)
  • Nov. 15-16 in Lowell, Mass. (organized by Holly Yanco, director of the NERVE Center at the University of Massachusetts Lowell)

Participation in these workshops will be by invitation only. To participate, please submit a white paper/position statement of a maximum length of 1.5 pages. What are key use cases for robotics in a five-to-10-year perspective, what are key limitations, and what R&D is needed in that time frame? The white paper can address all three aspects or focus on one of them. The white paper must include the following information:

  • Name, affiliation, and e-mail address
  • A position statement (1.5 pages max)

Please submit the white paper as regular text or as a PDF file. Statements that are too long will be ignored. Position papers that only focus on current research are not appropriate. A white paper should present a future vision and not merely discuss state of the art.

White papers should be submitted by end of the day Aug. 15, 2019, to roadmapping@robotics-vo.org. Late submissions may not be considered. We will evaluate submitted white papers by Aug. 18 and select people for the workshops by Aug. 19.

Roadmap revision timeline

The workshop reports will be used as the basis for a synthesis of a new roadmap. The nominal timeline is:

  • August 2019: Call for white papers
  • September – November 2019: Workshops
  • December 2019: Workshop reports finalized
  • January 2020: Synthesis meeting at UC San Diego
  • February 2020: Publish draft roadmap for community feedback
  • April 2020: Revision of roadmap based on community feedback
  • May 2020: Finalize roadmap with graphics design
  • July 2020: Publish roadmap

If you have any questions about the process, the scope, etc., please send e-mail to Henrik I Christensen at hichristensen@eng.ucsd.edu.

Henrik I Christensen spoke at the Robotics Summit & Expo in Boston.

Editor’s note: Christensen, Qualcomm Chancellor’s Chair of Robot Systems at the University of California San Diego and co-founder of Robust AI, delivered a keynote address at last month’s Robotics Summit & Expo, produced by The Robot Report.

Freedom Robotics raises seed funding for robotics dev tools, fleet controls

RMS enables fleet management and troubleshooting. Source: Freedom Robotics

SAN FRANCISCO — Freedom Robotics Inc. today announced that it has closed a $6.6 million seed round. The company provides a cloud-based software development infrastructure for managing fleets of robots.

Freedom Robotics cited a study by the World Economic Forum stating that, by 2025, machines will perform more tasks than humans, creating 58 million jobs worldwide. The company plans to use its funding to build its team and technology.

Freedom Robotics claimed that robotics startups can get their products to market 10 times faster by using its tools to do the “undifferentiated heavy lifting” rather than devoting employees to developing a full software stack. The company said its platform-agnostic Robotics Management Software (RMS) provides the “building blocks” for prototyping, building, operating, and scaling robot fleets.

Freedom Robotics builds RMS for developers

“We’ve seen that robotics is hard,” observed Dimitri Onistsuk, co-founder of Freedom Robotics. “In sixth grade, I wrote a letter to myself saying that I would go to MIT, drop out, and found a company that would change the world.”

Onistsuk did go to MIT, drop out, and draw on his experiences with Hans Lee and Joshua Wilson, now chief technology officer and CEO, respectively, at Freedom Robotics.

“We had been building things together before there was a cloud,” recalled Onistsuk. “Now in robotics, very few people have the ability to build a full stack.”

“We see robotics developers who have wonderful applications, like caring for the elderly; transportation; or dull, dirty, and dangerous work,” he said. “Everyone agrees on the value of this area, but they don’t realize the complexity of day-to-day iteration, which requires many engineers and a lot of infrastructure for support.”

“Robotics is like the Web in 2002, where everyone who wants to make an attempt has to raise $10 million and get expert talent in things like computer vision, mechatronics, systems integration, and ROS,” Onistsuk told The Robot Report. “It costs a lot of money to try even once to get a product to market.”

“We’ve combined layers of distinct software services by bringing modern software-development techniques into robotics, which traditionally had a hardware focus,” he said. “You can use one or many — whatever you have to do to scale.”

‘AWS for robots’

Freedom Robotics said that its cloud-based tools can be installed with just a single line of code, and its real-time visualization tools combine robotics management and analysis capabilities that were previously scattered across systems.

“Developers are always trying to improve their processes and learn new things,” said Onistsuk. “Amazon Web Services allows you to bring up a computer with a single line of code. We spent most of the first six months as a company figuring out how to do that for robots. We even bought the domain name ’90 seconds to go.'”

“You can drop in one line of code and immediately see real-time telemetry and have a cloud link to a robot from anywhere in the world,” he said. “Normally, when you want to adopt new components and are just trying to build a robot where the components talk to one another, that can take months.”

“During one on-boarding call, a customer was able to see within two minutes real-time telemetry from robots,” Onistsuk said. “They had never seen sensor-log and live-streaming data together. They thought the video was stuttering, but then an engineer noticed an error in a robot running production software. The bug had already been pushed out to customers. They never had the tools before to see all data in one place in developer-friendly ways.”

“That is the experience we’re getting when building software alongside the people who build robots,” he said. “With faster feedback loops, companies can iterate 10 times faster and move developers to other projects.”

Freedom Robotics’ RMS combines robotics tools to help developers and robotics managers. Source: Freedom Robotics

The same tools for development, management

Onistsuk said that his and Lee’s experience led them to follow standard software-development practices. “Some truths are real — for your core infrastructure, you shouldn’t have to own computers — our software is cloud-based for that reason,” he said.

“We stand on the shoulders of giants and practice what we preach,” Onistsuk asserted. “Pieces of our underlying infrastructure run on standard clouds, and we follow standard ways of building them.”

He said that not only does Freedom Robotics offer standardized development tools; it also uses them to build its RMS.

“With a little thought, for anything that you want to do with our product, you have access to the API calls across the entire fleet,” said Onistsuk. “We used the same APIs to build the product as you would use to run it.”

Resource monitoring with RMS. Source: Freedom Robotics

Investors and interoperability

Initialized Capital led the funding round, with participation from Toyota AI Ventures, Green Cow Venture Capital, Joe Montana’s Liquid 2 Ventures, S28 Capital partner Andrew Miklas, and James Lindenbaum. They joined existing investors Kevin Mahaffey, Justin Kan, Matt Brezina, Arianna Simpson, and Josh Buckley.

“We’ll soon reach a point when there are more robots than cell phones, and we’ll need the ‘Microsoft of robotics’ platform to power such a massive market,” said Garry Tan, managing partner at Initialized Capital, which has backed companies such as Instacart, Coinbase, and Cruise.

“Cloud learning will be a game-changer for robotics, allowing the experience of one robot to be ‘taught’ to the rest on the network. We’ve been looking for startups with the technology and market savvy to realize this cloud robotics future through fleet management, control, and analytics,” said Jim Adler, founding managing director at Toyota AI Ventures. “We were impressed with Freedom Robotics’ customer-first, comprehensive approach to managing and controlling fleets of robots and look forward to supporting the Freedom team as they make cloud robotics a market reality.”

“We found out about Toyota AI Ventures through its Twitter account,” said Onistsuk. “We got some referrals and went and met with them. As the founder of multiple companies, Jim [Adler] understood us in a way that industry-specific VCs couldn’t. He got our experience in robotics, building teams, and data analytics.”

What about competing robotics development platforms? “We realized from Day 1 that we shouldn’t be fighting,” Onistsuk replied. “We’re fully integrated with the cloud offerings of Amazon, Google, and Microsoft, as well as ROS. We have drop-in compatibility.”

“What we’re trying to power with that is allowing developers to build things that differentiate their products and services and win customers,” he added. “This is similar to our cloud-based strategy. We try to be hardware-agnostic. We want RMS to work out of the box with as many tools and pieces of hardware as possible so that people can try things rapidly.”

The Freedom Robotics team has raised seed funding. Source: Freedom Robotics

Hardware gets commoditized

“Hardware is getting commoditized and driving market opportunity,” said Onistsuk. “For instance, desktop compute is only $100 — not just Raspberry Pi, but x86 — you can buy a real computer running a full operating system.”

“Sensors are getting cheaper thanks to phones, and 3D printing will affect actuators. NVIDIA is putting AI into a small, low-power form factor,” he added. “With cheaper components, we’re looking for $5,000 robot arms rather than $500,000 arms, and lots of delivery companies are looking to make a vehicle autonomous and operating at a price point that’s competitive.”

“Companies can use RMS to build their next robots as a service [RaaS], and we’ve worked with everything from the largest entertainment companies to sidewalk delivery startups and multibillion-dollar delivery companies,” Onistsuk said. “Freedom Robotics is about democratizing robotics development and removing barriers to entry so that two guys in a garage can scale out to a business because of demand. The dreams of people with real needs in robotics will cause the next wave of innovation.”

“Software infrastructure is hard to do — we take what many developers consider boring so that they can sell robots into businesses or the home that get better over time,” he said.

‘Inspiring’ feedback

Customer feedback so far has been “overwhelmingly inspiring,” said Onistsuk. “The best moments are getting an e-mail from a customer saying, ‘We’re using your product, and we thought we didn’t want some login or alerting plug-in. We have a demo tomorrow, and it would take four months to build it, but you can do it.'”

“We’ve seen from our interactions that the latest generation of robotics developers has different expectations,” he said. “We’re seeing them ‘skating to where the puck is,’ iterating quickly to build tools and services around our roadmap.”

“The RMS is not just used by developers,” Onistsuk said. “Development, operations, and business teams can find and solve problems in a collaborative way with the visualization tool. We can support teams managing multiple robots with just a tablet, and it integrates with Slack.”

“We can go from high-level data down to CPU utilization,” Lee said. “With one click, you can get a replay of GPS and telemetry data and see every robot with an error. Each section is usually one engineer’s responsibility.”

“A lot of times, people develop robots for university research or an application, but how does the robot perform in the field when it’s in a ditch?” said Lee. “We can enable developers to make sure robots perform better and safer.”

Freedom Robotics’ software is currently being used in industries including agriculture, manufacturing, logistics, and restaurants.

“This is similar to getting dev done in minutes, not months, and it could speed up the entire robotics industry,” Onistsuk added. “Investors are just as excited about the team, scaling the business, and new customers as I am.”

Challenges of building haptic feedback for surgical robots


Minimally invasive surgery (MIS) is a modern technique that allows surgeons to perform operations through small incisions (usually 5-15 mm). Although it has numerous advantages over older surgical techniques, MIS can be more difficult to perform. Some inherent drawbacks are:

  • Limited motion due to straight laparoscopic instruments and fixation enforced by the small incision in the abdominal wall
  • Impaired vision due to two-dimensional imaging
  • Amplification of the surgeon’s tremor by the long instruments
  • Poor ergonomics imposed on the surgeon
  • Loss of haptic feedback, which is distorted by friction forces on the instrument and reactionary forces from the abdominal wall.

Minimally Invasive Robotic Surgery (MIRS) offers solutions that minimize or eliminate many of the pitfalls associated with traditional laparoscopic surgery. MIRS platforms such as Intuitive Surgical’s da Vinci, approved by the U.S. Food and Drug Administration in 2000, represent a historical milestone in surgical treatment. The ability to retain the advantages of laparoscopic surgery while augmenting surgeons’ dexterity and visualization and eliminating the ergonomic discomfort of long surgeries makes MIRS an essential technology for patients, surgeons, and hospitals.

However, despite all the improvements brought by currently available commercial MIRS platforms, haptic feedback remains a major limitation reported by robot-assisted surgeons. Because the interventionist no longer manipulates the instrument directly, the natural haptic feedback is eliminated. Haptics combines kinesthetic perception (the form and shape of muscles, tissues, and joints) with tactile perception (cutaneous texture and fine detail), and it spans many physical variables such as force, distributed pressure, temperature, and vibration.

Direct benefits of sensing interaction forces at the surgical end-effector are:

  • Improved organic tissue characterization and manipulation
  • Assessment of anatomical structures
  • Reduction of suture breakage
  • An overall improvement in the feel of robot-assisted surgery.

Haptic feedback also plays a fundamental role in shortening the learning curve for young surgeons in MIRS training. A tertiary benefit of accurate real-time direct force measurement is that the data collected from these sensors can be utilized to produce accurate tissue and organ models for surgical simulators used in MIS training. Futek Advanced Sensor Technology, an Irvine, Calif.-based sensor manufacturer, shared these tips on how to design and manufacture haptic sensors for surgical robotics platforms.

With a force, torque and pressure sensor enabling haptic feedback to the hands of the surgeon, robotic minimally invasive surgery can be performed with higher accuracy and dexterity while minimizing trauma to the patient. | Credit: Futek

Technical and economic challenges of haptic feedback

Adding to the inherent complexity of measuring haptics, engineers and neuroscientists face important issues that require consideration before the sensor design and manufacturing stages. The location of the sensing element, which significantly influences measurement consistency, presents MIRS designers with a dilemma: should they place the sensor outside the abdominal wall, near the actuation mechanism driving the end-effector (indirect force sensing), or inside the patient at the instrument tip, embedded in the end-effector (direct force sensing)?

The pros and cons of these two approaches are associated with measurement accuracy, size restrictions and sterilization and biocompatibility requirements. Table 1 compares these two force measurement methods.

In MIRS applications, where very delicate instrument-tissue interaction forces must be fed back precisely to the surgeon, measurement accuracy is a sine qua non, which makes intra-abdominal direct sensing the ideal option.

However, this novel approach not only brings the design and manufacturing challenges described in Table 1 but also demands higher reusability. Commercially available MIRS systems that are modular in design allow the laparoscopic instrument to be reused approximately 12 to 20 times. Adding the sensing element near the end-effector invariably increases the cost of the instrument and demands further consideration during the design stage to enhance sensor reusability.

The electronic components, strain measurement method, and electrical connections have to withstand repeated autoclave cycles as well as high-pH washdowns. Coping with these special design requirements invariably increases the unit cost per sensor. However, the extended lifespan and higher cycle count reduce the cost per cycle and make the direct measurement method financially viable.

Hermeticity of high-precision, sub-miniature load sensing elements is an equally challenging aspect of intra-abdominal direct force measurement. The conventional approach to sealing electronic components is conformal coating, which is used extensively in submersible devices. While coatings protect consumer electronics in low-pressure water submersion environments, they are not sufficiently airtight and are not suitable for high-reliability medical devices that must be reusable and sterilizable.

Even under extreme process controls, conformal coatings have proven marginal, surviving on the order of 20 to 30 autoclave cycles. The autoclave sterilization process presents a harsher physicochemical environment of high-pressure, high-temperature saturated steam. As with helium leak detection, saturated steam molecules are much smaller than liquid water droplets and can penetrate and degrade the coating over time, causing the device to fail in ways that are hard to predict.

Another conventional approach to achieving hermeticity is to weld a header interface onto the sensor. Welding, however, runs into obstacles in miniaturized sensors because of size constraints. A novel and more robust approach is a monolithic sensor using a custom-formulated, CTE-matched, chemically neutral, high-temperature fused isolator technology to feed electrical conductors through the walls of the hermetically sealed active sensing element. Fused isolator technology has demonstrated reliability over hundreds to thousands of autoclave cycles.


The Robot Report launched the Healthcare Robotics Engineering Forum (Dec. 9-10 in Santa Clara, Calif.). The conference and expo focuses on improving the design, development and manufacture of next-generation healthcare robots. The Healthcare Robotics Engineering Forum is currently accepting speaking proposals through July 26, 2019. To submit a proposal, fill out this form.


Other design considerations for haptic feedback

As mentioned above, miniaturization, biocompatibility, autoclavability, and high reusability are some of the unique requirements imposed on a haptic sensor by the surgical environment. In addition, it is imperative that designers also meet the requirements inherent to any high-performance force measurement device.

Compensation of extraneous loads (or crosstalk) provides optimal resistance to off-axis loads, assuring maximum operating life and minimizing reading errors. Force and torque sensors are engineered to capture forces along the Cartesian axes, typically X, Y, and Z. A sensor provides one to six measurement channels derived from these three orthogonal axes: three force channels (Fx, Fy, and Fz) and three torque or moment channels (Mx, My, and Mz). In theory, a load applied along one axis should not produce a measurement in any of the other channels, but this is not always the case. For a majority of force sensors, this undesired cross-channel interference is between 1 and 5%, and, considering that one channel can capture extraneous loads from five other channels, the total crosstalk could be as high as 5 to 25%.
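
As a rough illustration of that arithmetic, and of how crosstalk is typically corrected with a calibration matrix, here is a minimal sketch. The 6x6 sensitivity matrix and its 1-5% off-diagonal terms are assumed values for illustration only, not Futek calibration data.

```python
import numpy as np

# Hypothetical 6x6 sensitivity matrix for a six-axis force/torque sensor:
# ones on the diagonal (primary sensitivities) plus 1-5% off-diagonal
# crosstalk terms (assumed values, for illustration only).
rng = np.random.default_rng(0)
C = np.eye(6) + rng.uniform(0.01, 0.05, size=(6, 6)) * (1 - np.eye(6))

# Worst case quoted above: five neighboring channels each coupling in at 5%.
print("worst-case crosstalk on one channel:", 5 * 0.05)   # 0.25 -> 25%

# A pure Fx load of 10 N plus a small My moment, read through the crosstalk.
true_load = np.array([10.0, 0.0, 0.0, 0.0, 0.2, 0.0])     # Fx, Fy, Fz, Mx, My, Mz
raw_reading = C @ true_load                                # contaminated channels

# Matrix compensation: apply the inverse of the calibration matrix measured
# during characterization to recover the true load on every channel.
compensated = np.linalg.solve(C, raw_reading)
print(np.round(compensated, 3))
```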

In robotic surgery, the sensor must be designed to negate the extraneous or crosstalk loads, which include friction between the end-effector instrument and the trocar, reactionary forces from the abdominal wall, and the gravitational effect of mass along the instrument axis. In some cases, miniaturized sensors are so limited in space that side loads have to be compensated by alternate methods, such as electronic or algorithmic compensation.

Calibration of a direct inline force sensor imposes restrictions as well. The calibration fixtures are optimized with SR buttons to direct the load precisely through the sensor. If the calibration assembly is not equipped with such arrangements, the final calibration might be affected by parallel load paths.

Thermal effects are also a major challenge in strain measurement. Temperature variations cause material expansion, gage factor variation, and other undesirable effects on the measurement. For this reason, temperature compensation is paramount to ensure accuracy and long-term stability, even under severe ambient temperature swings.

The measures to counteract temperature effects on the readings are:

  • The use of high-quality, custom and self-compensated strain gages compatible with the thermal expansion coefficient of the sensing element material
  • Use of half or full Wheatstone bridge circuit configuration installed in both load directions (tension and compression) to correct for temperature drift
  • Full internal temperature compensation of zero balance and output range, without the need for external conditioning circuitry.

In some special cases, the use of custom strain gages with reduced solder connections helps reduce temperature impacts from solder joints. Usually, a regular force sensor with four individual strain gages has upwards of 16 solder joints, while custom strain elements can reduce this down to less than six. This design consideration improves reliability as the solder joint, as an opportunity for failure, is significantly reduced.
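
As a rough illustration of why a full-bridge configuration cancels temperature drift (a minimal numerical sketch with assumed gage values, not Futek's actual design), the following computes the bridge output when all four gages drift equally with temperature versus when opposite arms are strained differentially by a load:

```python
def bridge_output(r1, r2, r3, r4, v_exc=5.0):
    """Differential output (volts) of a Wheatstone bridge: r1/r2 form one
    voltage divider, r3/r4 the other, and the output is the difference
    between the two mid-points."""
    return v_exc * (r2 / (r1 + r2) - r4 / (r3 + r4))

R0 = 350.0   # nominal gage resistance in ohms (assumed)
dT = 0.5     # resistance shift common to all gages from a temperature swing (assumed)
dS = 0.2     # resistance shift produced by the mechanical strain (assumed)

# Pure temperature drift: every gage shifts by the same amount, both dividers
# stay balanced, and the output remains zero.
print(bridge_output(R0 + dT, R0 + dT, R0 + dT, R0 + dT))                      # 0.0 V

# Load applied on top of the same drift: opposite arms are strained in opposite
# directions, so a signal appears while the common thermal term still cancels.
print(bridge_output(R0 + dT - dS, R0 + dT + dS, R0 + dT + dS, R0 + dT - dS))  # ~2.9 mV
```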

During the design phase, it is also imperative to design such sensors for high reliability along with high-volume manufacturability, taking into consideration the equipment and processes that will be required should the device be designated for high-volume production. The automated, high-volume processes could be slightly or significantly different from the benchtop or prototype equipment used for lower volumes. Scaling up must maintain a focus on reducing failure points during the manufacturing process, along with failure points that could occur in the field.

Testing for medical applications is less about resisting strenuous structural stress and more about a measurement device’s ability to withstand a high number of cycles. For medical sensors in particular, overload and fatigue testing must be performed in conjunction with sterilization testing, in an interleaved process of several fatigue and sterilization cycles. The ability to survive hundreds of overload cycles while maintaining hermeticity translates into a high-reliability sensor with a higher mean time between failures (MTBF) and a more competitive total cost of ownership.

Credit: Futek

Product development challenges

Although understanding the inherent design challenges of the haptic autoclavable sensor is imperative, the sensor manufacturer must be equipped with a talented multidisciplinary engineering team, in-house manufacturing capabilities supported by fully developed quality processes and product/project management proficiency to handle the complex, resource-limited, and fast-paced new product development environment.

A multidisciplinary approach will result in a sensor element that meets the specifications in terms of nonlinearity, hysteresis, repeatability and cross-talk, as well as an electronic instrument that delivers analog and digital output, high sampling rate and bandwidth, high noise-free resolution and low power consumption, both equally necessary for a reliable turnkey haptics measurement solution.

Strategic control of all manufacturing processes (machining, lamination, wiring, calibration), allows manufacturers to engineer sensors with a design for manufacturability (DFM) mentality. This strategic control of manufacturing boils down to methodically selecting the bill of material, defining the testing plans, complying with standards and protocols and ultimately strategizing the manufacturing phase based on economic constraints.

Cassie bipedal robot a platform for tackling locomotion challenges

Working in the Dynamic Autonomy and Intelligent Robotics lab at the University of Pennsylvania, Michael Posa (right) and graduate student Yu-Ming Chen use Cassie to help develop better algorithms that can help robots move more like people. | Credit: Eric Sucar

What has two legs, no torso, and hangs out in the basement of the University of Pennsylvania’s Towne Building?

It’s Cassie, a dynamic bipedal robot, a recent addition to Michael Posa’s Dynamic Autonomy and Intelligent Robotics (DAIR) Lab. Built by Agility Robotics, a company in Albany, Oregon, Cassie offers Posa and his students the chance to create and test the locomotion algorithms they’re developing on a piece of equipment that’s just as cutting-edge as their ideas.

“We’re really excited to have it. It offers us capabilities that are really unlike anything else on the commercial market,” says Posa, a mechanical engineer in the School of Engineering and Applied Science. “There aren’t many options that exist, and this means that every single lab that wants to do walking research doesn’t have to spend three years building its own robot.”

Having Cassie lets Posa’s lab members spend all their time working to solve the huge challenge of designing algorithms so that robots can walk and navigate across all kinds of terrain and circumstances.

“What we have is a system really designed for dynamic locomotion,” he says. “We get very natural speed in terms of leg motions, like picking up a foot and putting it down somewhere else. For us, it’s a really great system.”

“It offers us capabilities that are really unlike anything else on the commercial market,” Posa says about Cassie. | Credit: Eric Sucar

Why do the legs matter? Because they dramatically expand the possibilities of what a robot can do. “You can imagine how legged robots have a key advantage over wheeled robots in that they are able to go into unstructured environments. They can go over relatively rough terrain, into houses, up a flight of stairs. That’s where a legged robot excels,” Posa says. “This is useful in all kinds of applications, including basic exploration, but also things like disaster recovery and inspection tasks. That’s what’s drawing a lot of industry attention these days.”

Of course, walking over different terrain or up a curb, step, or other incline dramatically increases what a robot has to do to stay upright. Consider what happens when you walk: Bump into something with your elbow, and your body has to reverse itself to avoid knocking it over, as well as stabilize itself to avoid falling in the opposite direction.

Related: Ford package delivery tests combine autonomous vehicles, bipedal robots

A robot has to be told to do all of that – which is where Posa’s algorithms come in, starting from where Cassie’s feet go down as it takes each step.

“Even with just legs, you have to make all these decisions about where you’re going to put your feet,” he says. “It’s one of those decisions that’s really very difficult to handle because everything depends on where and when you’re going to put your feet down, and putting that foot down creates an impact: You shift your weight, which changes your balance, and so on.



“This is a discrete event that happens quickly. From a computational standpoint, that’s one of the things we really struggle with—how do we handle these contact events?”

Then there’s the issue of how to model what you want to tell the robot to do. Simple modeling considers the robot as a point moving in space rather than, for example, a machine with six joints in its leg. But of course, the robot isn’t a point, and working with those models means sacrificing capability. Posa’s lab is trying to build more sophisticated models that, in turn, make the robot move more smoothly.

“We’re interested in the sort of middle ground, this Goldilocks regime between ‘this robot has 12 different motors’ and ‘this robot is a point in space,'” he says.

Related: 2019 the Year of Legged Robots

Cassie’s predecessor was called ATRIAS, an acronym for “assume the robot is a sphere.” ATRIAS allowed for more sophisticated models and more ability to command the robot, but was still too simple, Posa says. “The real robot is always different than a point or sphere. The question is where should our models live on this spectrum, from very simple to very complicated?”

Two graduate students in the DAIR Lab have been working on the algorithms, testing them in simulation and then, finally, on Cassie. Most of the work is virtual, since Cassie is really for testing the pieces that pass the simulation test.

“You write the code there,” says Posa, gesturing at a computer across the lab, “and then you flip a switch and you’re running it with the real robot. In general, if it doesn’t work in the simulator, it’s not going to work in the real world.”

Graduate students, including Chen (left), work on designing new algorithms and running computer simulations before testing them on Cassie. | Credit: Eric Sucar

On the computer, the researchers can take more risks, says graduate student Yu-Ming Chen. “We don’t break the robot in simulation,” he says, chuckling.

So what happens when you take these legs for a spin? The basic operation involves a marching type of step, as Cassie’s metal feet clang against the floor. But even as the robot makes these simple motions, it’s easy to see how the joints and parts work together to make a realistic-looking facsimile of a legged body from the waist down.

With Cassie as a platform, Posa says he’s excited to see how his team can push locomotion research forward.

“We want to design algorithms to enable robots to interact with the world in a safe and productive fashion,” he says. “We want [the robot] to walk in a way that is efficient, energetically, so it can travel long distances, and walk in a way that’s safe for both the robot and the environment.”

Editor’s Note: This article was republished from the University of Pennsylvania.

MIT ‘walking motor’ could help robots assemble complex structures


Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

New approach to building robots

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled them into a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Robots working in confined spaces

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures makes it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”


Editor’s Note: This article was republished from MIT News.


4 Overheating solutions for commercial robotics

Stanford University researchers have developed a lithium-ion battery that shuts down before overheating. Source: Stanford University

Overheating can become a severe problem for robots. Excessive temperatures can damage internal systems or, in the most extreme cases, cause fires. Commercial robots that regularly get too hot can also cost precious time, as operators are forced to shut down and restart the machines during a given shift.

Fortunately, robotics designers have several options for keeping industrial robots cool and enabling workflows to progress smoothly. Here are four examples of technologies that could keep robots at the right temperature.

1. Lithium-ion batteries that automatically shut off and restart

Many robots, especially mobile platforms for factories or warehouses, have lithium-ion battery packs. Such batteries are popular and widely available, but they’re also prone to overheating and potentially exploding.

Researchers at Stanford University engineered a battery with a special coating that stops it from conducting electricity if it gets too hot. As the heat level climbs, the coating expands, causing a functional change that makes the battery no longer conductive. Once it cools, the battery starts providing power as usual.

The research team did not specifically test the coating in robots powered by lithium-ion batteries. However, it noted that the work has practical merit for a variety of use cases because the temperature at which the battery shuts down can be tuned.

For example, if a robot has extremely sensitive internal parts, users would likely want it to shut down at a lower temperature than when using it in a more tolerant machine.

2. Sensors that measure a robot’s ‘health’ to avoid overheating

Commercial robots often allow corporations to achieve higher, more consistent performance levels than would be possible with human effort alone. Industrial-grade robots don’t need rest breaks, but unlike humans who might speak up if they feel unwell and can’t complete a shift, robots can’t necessarily notify operators that something’s wrong.

However, Saarland University researchers have devised a method that subjects industrial machines to the equivalent of a continuous medical checkup. Similar to how consumer health trackers measure things like a person’s heart rate and activity levels and let them share these metrics with a physician, the team aims to do the same with industrial machinery.

Continual robot monitoring

A research team at Saarland University has developed an early warning system for industrial assembly, handling, and packaging processes. Research assistants Nikolai Helwig (left) and Tizian Schneider test the smart condition monitoring system on an electromechanical cylinder. Credit: Oliver Dietze, Saarland University

It should be possible to see numerous warning signs before a robot gets too hot. The scientists explained that they use special sensors that fit inside the machines and can interact with one another as well as a robot’s existing process sensors. The sensors collect baseline data. They can also recognize patterns that could indicate a failing part — such as that the machine gets hot after only a few minutes of operating.

That means the sensors could warn plant operators of immediate issues, like when a robot requires an emergency shutdown because of overheating. It could also help managers understand if certain processes make the robots more likely to overheat than others. Thanks to the constant data these sensors provide, human workers overseeing the robots should have the knowledge they need to intervene before a catastrophe occurs.
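
As a rough illustration of this baseline-and-deviation style of monitoring (a minimal sketch with made-up readings and thresholds, not the Saarland system itself), a single temperature channel can be checked against statistics gathered during known-good operation:

```python
import statistics

def build_baseline(samples):
    """Learn the mean and spread of a sensor channel from known-good operation."""
    return statistics.mean(samples), statistics.stdev(samples)

def check_temperature(reading_c, baseline, hard_limit_c=80.0, sigmas=3.0):
    """Flag readings that drift from the baseline or cross a hard shutdown limit."""
    mean, spread = baseline
    if reading_c >= hard_limit_c:
        return "EMERGENCY: shut down and cool"
    if reading_c > mean + sigmas * spread:
        return "WARNING: running hotter than baseline, schedule inspection"
    return "OK"

# Baseline data gathered while the robot was healthy (values assumed).
baseline = build_baseline([41.2, 42.0, 40.8, 41.5, 42.3, 41.0])

print(check_temperature(43.0, baseline))   # OK
print(check_temperature(55.0, baseline))   # WARNING
print(check_temperature(85.0, baseline))   # EMERGENCY
```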

Manufacturers already use predictive analytics to determine when to perform maintenance. This approach could provide even more benefits because it goes beyond maintenance alerts and warns when robots stray from their usual operating conditions because of overheating or other issues that need further investigation.

3. Thermally conductive rubber

When engineers design robots or work in the power electronics sector, heat dissipation technologies are almost always among the things to consider before the product becomes functional. For example, even in a device that’s 95% efficient, the remaining 5% gets converted into heat that needs to escape.
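
As a quick worked example of that remaining 5% (the power draw below is an assumed figure, purely for illustration):

```python
input_power_w = 500.0      # assumed electrical draw of a robot drive system
efficiency = 0.95          # the 95% efficiency cited above

waste_heat_w = input_power_w * (1 - efficiency)
print(f"{waste_heat_w:.0f} W of heat must be dissipated")   # 25 W
```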

Power electronics overheating roadmap. Source: Advanced Cooling Technologies

Pumped liquid, extruded heatsinks, and vapor chambers are some of the available methods for keeping power electronics cool. Returning to commercial robotics specifically, Carnegie Mellon University scientists have developed a material that aids in heat management for soft robots. They said their creation — nicknamed “thubber” — combines elasticity with high heat conductivity.

A nano-CT scan of “thubber” showing the liquid-metal microdroplets inside the rubber material. Source: Carnegie Mellon University

The material stretches to more than six times its initial length, which is impressive in itself. However, the CMU researchers also noted that the blend of high heat conductivity and flexibility is crucial for facilitating dissipation. They pointed out that past technologies required attaching high-powered devices to inflexible mounts, but they now envision creating these from thubber.

Then, the respective devices, whether bendable robots or folding electronics, could be more versatile and stay cool as they function.

4. Liquid cooling and fan systems

Many of the cooling technologies used in industrial robots work internally, so users don’t see them operating, but they know everything is functioning as it should because the machine stays at a desirable temperature. There are also some robots for which heat reduction is exceptionally important because of the tasks they take on. Firefighting robots are prime examples.

One of them, called Colossus, recently helped put out the Notre Dame fire in Paris. It has an onboard smoke ventilation system that likely has a heat-management component, too. Purchasers can also pay more to get a smoke-extracting fan. It’s an example of a mobile robot that uses lithium-ion batteries, making it a potential candidate for the first technology on the list.

There’s another firefighting robot called the Thermite, and it uses both water and fans to stay cool. For example, the robot can pump out 500 gallons of water per minute to control a blaze, but a portion of that liquid goes through the machine’s internal “veins” first to keep it from overheating.

In addition, part of Thermite converts into a sprinkler system, and onboard fans help recycle the associated mist and cool the machine’s components.

An array of overheating options

Robots are increasingly tackling jobs that are too dangerous for humans. As these examples show, they’re up to the task as long as the engineers working to develop those robots remain aware of internal cooling needs during the design phase.

This list shows that engineers aren’t afraid to pursue creative solutions as they look for ways to avoid overheating. Although many of the technologies described here are not yet available for people to purchase, it’s worthwhile for developers to stay abreast of the ongoing work. The attempts seem promising, and even cooling efforts that aren’t ready for mainstream use could lead to overall progress.

Researchers building modular, self-programming robots to improve HRI

Many work processes would be almost unthinkable today without robots. But robots operating in manufacturing facilities have often posed risks to workers because they are not responsive enough to their surroundings.

To make it easier for people and robots to work in close proximity in the future, Prof. Matthias Althoff of the Technical University of Munich (TUM) has developed a new system called IMPROV, which uses interconnectable modules for self-programming and self-verification.

When companies use robots to produce goods, they generally have to position their automatic helpers in safety cages to reduce the risk of injury to people working nearby. A new system could soon free the robots from their cages and thus transform standard practices in the world of automation.

Althoff has developed a toolbox principle for the simple assembly of safe robots using various components. The modules can be combined in almost any way desired, enabling companies to customize their robots for a wide range of tasks – or simply replace damaged components. Althoff’s system was presented in a paper in the June 2019 issue of Science Robotics.

Built-in chip enables the robot to program itself

Robots that can be configured individually using a set of components have been seen before. However, each new model required expert programming before going into operation. Althoff has equipped each module in his IMPROV robot toolbox with a chip that enables every modular robot to program itself on the basis of its own individual toolkit.

In the Science Robotics paper, the researchers said “self-programming of high-level tasks was not considered in this work. The created models were used for automatically synthesizing model-based controllers, as well as for the following two aspects.”

Self-verification

To account for dynamically changing environments, the robot formally verifies, by itself, whether any human could be harmed by its planned actions during operation. A planned motion is verified as safe if none of the possible future movements of surrounding humans leads to a collision.

Because innumerable possible future motions of surrounding humans exist, Althoff bounds the set of possible motions using reachability analysis. He said the inherently safe approach renders robot cages unnecessary in many applications.
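
As a rough, one-dimensional illustration of that idea (a minimal sketch with assumed positions and speeds, not the formal reachability analysis used in IMPROV), a planned motion can be rejected whenever the set of positions a nearby person could reach within the planning horizon overlaps the region the robot would sweep through:

```python
def reachable_interval(position, max_speed, horizon):
    """Every point the agent could occupy within `horizon` seconds, assuming it
    can move at up to `max_speed` in either direction along a line."""
    return (position - max_speed * horizon, position + max_speed * horizon)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

def motion_is_safe(robot_sweep, human_pos, human_max_speed, horizon):
    """Reject the plan if the human's reachable set can touch the robot's swept set."""
    human_reach = reachable_interval(human_pos, human_max_speed, horizon)
    return not intervals_overlap(robot_sweep, human_reach)

# Assumed numbers: the robot plans to sweep from 0.0 m to 0.6 m over the next
# 0.5 s; a worker can move at up to 2.0 m/s.
print(motion_is_safe((0.0, 0.6), human_pos=2.0, human_max_speed=2.0, horizon=0.5))  # True
print(motion_is_safe((0.0, 0.6), human_pos=1.2, human_max_speed=2.0, horizon=0.5))  # False
```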

Scientist Christina Miller working on the modular robot arm. Credit: A. Heddergott/TUM

Keeping an eye on the people working nearby

“Our modular design will soon make it more cost-effective to build working robots. But the toolbox principle offers an even bigger advantage: With IMPROV, we can develop safe robots that react to and avoid contact with people in their surroundings,” said Althoff.

With the chip installed in each module and the self-programming functionality, the robot is automatically aware of all data on the forces acting within it as well as its own geometry. That enables the robot to predict its own path of movement.

At the same time, the robot’s control center uses input from cameras installed in the room to collect data on the movements of people working nearby. Using this information, a robot programmed with IMPROV can model the potential next moves of all of the nearby workers. As a result, it can stop before coming into contact with a hand, for example – or with other approaching objects.

“With IMPROV we can guarantee that the controls will function correctly. Because the robots are automatically programmed for all possible movements nearby, no human will be able to instruct them to do anything wrong,” says Althoff.

IMPROV shortens cycle times

For their toolbox set, the scientists used standard industrial modules for some parts, complemented by the necessary chips and new components from the 3D printer. In a user study, Althoff and his team showed that IMPROV not only makes working robots cheaper and safer – it also speeds them up: They take 36% less time to complete their tasks than previous solutions that require a permanent safety zone around a robot.

Editor’s Note: This article was republished from the Technical University of Munich.

Augmenting SLAM with deep learning

Some elements of the Spatial AI real-time computation graph. Credit: SLAMcore

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of a robot’s location within it. SLAM is being gradually developed towards Spatial AI, the common sense spatial reasoning that will enable robots and other artificial devices to operate in general ways in their environments.

This will enable robots to not just localize and build geometric maps, but actually interact intelligently with scenes and objects.

Enabling semantic meaning

A key technology that is helping this progress is deep learning, which has enabled many recent breakthroughs in computer vision and other areas of AI. In the context of Spatial AI, deep learning has most obviously had a big impact on bringing semantic meaning to geometric maps of the world.

Convolutional neural networks (CNNs) trained to semantically segment images or volumes have been used in research systems to label geometric reconstructions in a dense, element-by-element manner. Networks like Mask-RCNN, which detect precise object instances in images, have been demonstrated in systems that reconstruct explicit maps of static or moving 3D objects.
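
As a rough illustration of the per-instance masks such networks provide (a minimal sketch using torchvision's off-the-shelf, COCO-pretrained Mask R-CNN rather than the specific research systems described above; assumes torchvision 0.13 or newer), these outputs are the raw material a mapping back end would fuse into its 3D reconstruction:

```python
import torch
import torchvision

# Load an off-the-shelf, COCO-pretrained Mask R-CNN and put it in inference mode.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A stand-in RGB frame; in a SLAM system this would be the current camera image.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([image])[0]   # dict with 'boxes', 'labels', 'scores', 'masks'

# Keep confident instances; each mask is a per-pixel probability map that a
# mapping back end could project onto the 3D reconstruction as an object label.
keep = detections["scores"] > 0.7
masks = detections["masks"][keep]     # shape: [num_instances, 1, H, W]
labels = detections["labels"][keep]   # COCO class indices
print(masks.shape, labels.tolist())
```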

Deep learning vs. estimation

In these approaches, the divide between deep learning methods for semantics and hand-designed estimation methods for geometry is clear. More remarkable, at least to those of us from an estimation background, has been the emergence of learning techniques that now offer promising solutions to geometrical estimation problems. Networks can be trained to predict robust frame-to-frame visual odometry, dense optical flow, or depth from a single image.

When compared to hand-designed methods for the same tasks, these methods are strong on robustness, since they will always make predictions that are similar to real scenarios present in their training data. But designed methods still often have advantages in flexibility in a range of unforeseen scenarios, and in final accuracy due to the use of precise iterative optimization.

The three levels of SLAM, according to SLAMcore. Credit: SLAMcore

The role of modular design

It is clear that Spatial AI will make increasingly strong use of deep learning methods, but an excellent question is whether we will eventually deploy systems where a single deep network trained end to end implements the whole of Spatial AI.  While this is possible in principle, we believe that this is a very long-term path and that there is much more potential in the coming years to consider systems with modular combinations of designed and learned techniques.

There is an almost continuous sliding scale of possible ways to formulate such modular systems. The end-to-end learning approach is ‘pure’ in the sense that it makes minimum assumptions about the representation and computation that the system needs to complete its tasks. Deep learning is free to discover such representations as it sees fit. Every piece of design which goes into a module of the system or the ways in which modules are connected reduces that freedom. However, modular design can make the learning process tractable and flexible, and dramatically reduce the need for training data.

Building in the right assumptions

There are certain characteristics of the real world that Spatial AI systems must work in that seem so elementary that it is unnecessary to spend training capacity on learning them. These could include:

  • Basic geometry of 3D transformation as a camera sees the world from different views
  • Physics of how objects fall and interact
  • The simple fact that the natural world is made up of separable objects at all
  • Environments are made up of many objects in configurations with a typical range of variability over time which can be estimated and mapped.

By building these and other assumptions into modular estimation frameworks that still have significant deep learning capacity in the areas of both semantics and geometrical estimation, we believe that we can make rapid progress towards highly capable and adaptable Spatial AI systems. Modular systems have the further key advantage over purely learned methods that they can be inspected, debugged and controlled by their human users, which is key to the reliability and safety of products.
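
To make the modular idea concrete, here is a minimal structural sketch (with assumed interfaces and stub implementations; this is not SLAMcore's architecture) of how a learned module and a designed module might be composed so that each can be swapped, inspected, or debugged on its own:

```python
from dataclasses import dataclass
import numpy as np

class LearnedDepth:
    """Stand-in for a trained network that predicts per-pixel depth from RGB."""
    def predict(self, rgb):
        return np.full(rgb.shape[:2], 2.0)   # pretend every pixel is 2 m away

class DesignedPoseRefiner:
    """Stand-in for a hand-designed iterative optimizer (e.g. Gauss-Newton on a
    photometric or geometric error) that polishes the pose suggested upstream."""
    def refine(self, rgb, depth, initial_pose):
        return initial_pose   # a real module would minimize an error term here

@dataclass
class SpatialAIPipeline:
    depth_net: LearnedDepth            # learned module: robust priors from data
    pose_solver: DesignedPoseRefiner   # designed module: precise and inspectable

    def process_frame(self, rgb, last_pose):
        depth = self.depth_net.predict(rgb)                     # learned step
        return self.pose_solver.refine(rgb, depth, last_pose)   # designed step

pipeline = SpatialAIPipeline(LearnedDepth(), DesignedPoseRefiner())
pose = pipeline.process_frame(np.zeros((480, 640, 3)), np.eye(4))
print(pose.shape)   # (4, 4) camera pose, ready to feed the map update
```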

We still believe fundamentally in Spatial AI as a SLAM problem, and that a recognizable mapping capability will be the key to enabling robots and other intelligent devices to perform complicated, multi-stage tasks in their environments.

For those who want to read more about this area, please see my paper “FutureMapping: The Computational Structure of Spatial AI Systems.”

Andrew Davison, SLAMcore

About the Author

Professor Andrew Davison is a co-founder of SLAMcore, a London-based company that is on a mission to make spatial AI accessible to all. SLAMcore develops algorithms that help robots and drones understand where they are and what’s around them – in an affordable way.

Davison is a Professor of Robot Vision in the Department of Computing at Imperial College London, where he leads the Robot Vision Research Group. He has spent 20 years conducting pioneering research in visual SLAM, with a particular emphasis on methods that work in real time with commodity cameras.

He has developed and collaborated on breakthrough SLAM systems including MonoSLAM and KinectFusion, and his research contributions have over 15,000 academic citations. He also has extensive experience of collaborating with industry on the application of SLAM methods to real products.

Kollmorgen to present advanced motion control for commercial robots at Robotics Summit & Expo

Kollmorgen will exhibit its newest motion-centric automation solutions for designers and manufacturers of commercial robots and intelligent systems at the Robotics Summit & Expo 2019. Visitors are invited to Booth 202 to see and participate in a variety of product exhibits and exciting live demos.

Demos and other exhibits have been designed to show how Kollmorgen’s next-generation technology helps robot designers and manufacturers increase efficiency, uptime, throughput, and machine life.

Demonstrations

The AKM2G Servo Motor delivers the best power and torque density on the market, offering OEMs a way to increase performance and speed while cutting power consumption and costs. Highly configurable, with six frame sizes, up to five stack lengths per frame, and a variety of selectable options (such as feedback, mounting, and performance capabilities), the AKM2G can easily be dropped into existing designs.

Robotic Gearmotor Demo: Discover how Kollmorgen’s award-winning frameless motor solutions integrate seamlessly with strain wave gears, feedback devices, and servo drives to form a lightweight and compact robotic joint solution. Kollmorgen’s standard and custom frameless motor solutions enable smaller, lighter, and faster robots.

AGVs and Mobile Robots: Show attendees can learn about Kollmorgen's flexible, scalable vehicle control solutions for AGVs and mobile robots that handle materials in smart factories and warehouses.

Panel discussion

Tom Wood, Kollmorgen

Tom Wood, frameless motor product specialist at Kollmorgen, will participate in a session at 3:00 p.m. on Wednesday, June 5, in the "Technology, Tools, and Platforms" track at the Robotics Summit & Expo. He will be part of a panel on "Motion Control and Robotics Opportunities," which will examine how new and improved motion-control technologies are leading to new robotics capabilities, new applications, and entry into new markets.

Register now for the Robotics Summit & Expo, which will be at Boston’s Seaport World Trade Center on June 5-6.

About Kollmorgen

Since its founding in 1916, Kollmorgen's innovative solutions have brought big ideas to life, kept the world safer, and improved people's lives. Today, its world-class knowledge of motion systems and components, industry-leading quality, and deep expertise in linking and integrating standard and custom products continually deliver breakthrough motion solutions that are unmatched in performance, reliability, and ease of use. This gives machine builders around the world an irrefutable marketplace advantage and provides their customers with ultimate peace of mind.

For more information about Kollmorgen technologies, please visit www.kollmorgen.com or call 1-540-633-3545.

Build better robots by listening to customer backlash

In the wake of the closure of Apple’s autonomous car division (Project Titan) this week, one questions whether Steve Jobs’ axiom still holds true. “Some people say, ‘Give the customers what they want.’ But that’s not my approach. Our job is to figure out what they’re going to want before they do,” declared Jobs, who continued with an analogy: “I think Henry Ford once said, ‘If I’d asked customers what they wanted, they would have told me, “a faster horse!”’” Titan joins a growing graveyard of autonomous innovations, filled with the tombstones of Baxter, Jibo, Kuri, and many broken quadcopters. If anything holds true, it is that not every founder is a Steve Jobs or a Henry Ford, and that listening to public backlash could be a bellwether for success.

Adam Jonas of Morgan Stanley announced on Jan. 9, 2019, from the Consumer Electronics Show (CES) floor: “It’s official. AVs are overhyped. Not that the safety, economic, and efficiency benefits of robotaxis aren’t valid and noble. They are. It’s the timing… the telemetry of adoption for L5 cars without safety drivers expected by many investors may be too aggressive by a decade… possibly decades.”

The timing sentiment is probably best echoed by the backlash from residents of Chandler, Arizona, who have protested vocally, and at times violently, against Waymo’s self-driving trials on their streets. The rancor came to a head in August, when a 69-year-old local pointed his pistol at the robocar (and its human safety driver).

In a profile of the Arizona beta trial, The New York Times interviewed some of Waymo’s most vocal opponents in the Phoenix suburb. Erik and Elizabeth O’Polka expressed frustration with their elected leaders for turning their neighbors and their children into guinea pigs for artificial intelligence.

“They didn’t ask us if we wanted to be part of their beta test,” Elizabeth said adamantly. Her husband agreed: “They said they need real-world examples, but I don’t want to be their real-world mistake.” The couple has been warned several times by the Chandler police to stop trying to run Waymo cars off the road. Elizabeth confessed to the Times “that her husband ‘finds it entertaining to brake hard’ in front of the self-driving vans, and that she herself ‘may have forced them to pull over’ so she could yell at them to get out of their neighborhood.” The reporter revealed that tensions began to boil over “when their 10-year-old son was nearly hit by one of the vehicles while he was playing in a nearby cul-de-sac.”

Rethink's Baxter robot was the subject of a user backlash because of design limitations.

The deliberate sabotage by the O’Polkas could be indicative of the attitudes of millions of citizens who feel ignored by the pace of innovation. Deployments that run oblivious to this view, relying solely on the excitement of investors and insiders, ultimately face backlash as customers flock to competitors.

In the cobot world, the early battle between Rethink Robotics and Universal Robots (UR) is probably one of the highest-profile examples of tone-deaf invention by engineers. Rethink’s eventual demise was a classic case of form over function, with a lot of hype sprinkled on top.

Rodney Brooks’ collaborative robotics enterprise raised close to $150 million during its decade-long existence. The startup rode the coattails of its famous co-founder, often referred to as the godfather of robotics, before ever delivering a product.

Dedicated Rethink distributor Dan O’Brien recalled, “I’ve never seen a product get so much publicity. I fell in love with Rethink in 2010.” Its first product, Baxter, was released in 2012 and promised to bring safety, productivity, and a little whimsy to the factory floor. The robot stood about six feet tall, with two bright red arms connected to an animated screen displaying friendly facial expressions.

At the same time, Rethink’s robots were not able to perform as advertised in industrial environments, leading to a backlash and slow adoption. The problem stemmed from Brooks’ insistence on licensing the company’s actuation technology, series elastic actuators (SEAs), from his former employer MIT, instead of embracing the leading actuator, Harmonic Drive, for its robots’ motion. Users demanded greater precision from their machines, and competitors such as UR, a Harmonic Drive customer, took the lead in delivering it.

Universal Robots’ cobots perform better than those of the late Rethink Robotics.

The backlash to Baxter is best illustrated by the comments of Steve Leach, president of Numatic Engineering, an automation integrator. In 2010, Leach hoped that Rethink could be “the iPhone of the industrial automation world.”

However, “Baxter wasn’t accurate or smooth,” said Leach, who was dismayed after seeing the final product. “After customers watched the demo, they lost interest because Baxter was not able to meet their needs.”

“We signed on early, a month before Baxter was released, and thought the software and mechanics would be refined. But they were not,” sighed Leach. In the six years since Baxter’s disappointing launch, Rethink did little to address the SEA problem. Most of the 1,000 Baxters sold by Rethink went to academia, not to industry.

By contrast, Universal has booked more than 27,000 robots since its founding in 2005. Even Leach, who spent a year passionately trying to sell a single Baxter unit, switched to UR and sold his first one within a week. Leach elaborated, “From the ground up, UR’s firmware and hardware were specifically developed for industrial applications and met the expectations of those customers. That’s really where Rethink missed the mark.”

This garbage can robot seen at CES was designed to be cheap and avoid consumer backlash.

As machines permeate human streets, factories, offices, and homes, building a symbiotic relationship between intended operators and creators is even more critical. Too often, I meet entrepreneurs who demonstrate concepts with little input from potential buyers. This past January, the aisles of CES were littered with such items, but the one above was designed with a potential backlash in mind.

Simplehuman, the product development firm known for its elegantly designed housewares, unveiled a $200 aluminum robot trash can. This is part of a new line of Simplehuman’s own voice-activated products, potentially competing with Amazon Alexa. In the words of its founder, Frank Yang, “Sometimes, it’s just about pre-empting the users’ needs, and including features we think they would appreciate. If they don’t, we can always go back to the drawing board and tweak the product again.”

To understand the innovation ecosystem in the age of hackers, join the next RobotLab series on “Cybersecurity & Machines” with John Frankel of ffVC and Guy Franklin of SOSA on Feb. 12 in New York City. Seating is limited, so RSVP today!
