Automated system from MIT generates robotic actuators for novel tasks

An automated system developed by MIT researchers designs and 3D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. Credit: Subramanian Sundaram

CAMBRIDGE, Mass. — An automated system developed by researchers at the Massachusetts Institute of Technology designs and 3D prints complex robotic actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.

In a paper published in Science Advances, the researchers demonstrated the system by fabricating actuators that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. When it’s activated, it tilts at an angle and displays the famous Edvard Munch painting “The Scream.”

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility or magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials.

Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” said first author Subramanian Sundaram, Ph.D. ’18, a former graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

New robotic actuators mimic biology for efficiency

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming.

“You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram said.

Joining Sundaram on the paper were Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the ‘combinatorial explosion’

Robotic actuators are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.

Adding to that complexity, new 3D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high.

“What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram said.

The researchers first customized three polymer materials with specific properties they needed to build their robotic actuators: color, magnetization, and rigidity. They ultimately produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed.

Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.

MIT robotic actuator

Credit: Subramanian Sundaram

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels.

Actuators can be fabricated with more than 100 voxel layers, so each column can contain more than 100 voxels, and different sequences of the materials produce a different shade of gray when the actuator is flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone.

The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram said. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
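That accept-if-better loop can be sketched in a few lines of code. The snippet below is only an illustration under assumed names and a toy shading model — the real system ray-traces roughly 5.5 million voxels — but it captures the logic of proposing a single-voxel material swap and keeping it only when the rendered images move closer to the targets.

```python
import random

# Simplified stand-in material names; not the team's actual material set.
MATERIALS = ["clear_rigid", "opaque_flexible", "magnetic_brown"]

def column_shade(column, tilted):
    """Crude stand-in for ray-tracing one voxel column: more brown
    (magnetic) voxels in the light path means a darker shade."""
    voxels = column[1:] if tilted else column        # tilting shifts the sampled voxels
    brown = sum(1 for v in voxels if v == "magnetic_brown")
    return 1.0 - brown / max(len(column), 1)         # 1.0 = light, 0.0 = dark

def image_error(columns, target_flat, target_tilted):
    """Squared error between rendered and target shades at both angles."""
    return sum((column_shade(c, False) - f) ** 2 + (column_shade(c, True) - t) ** 2
               for c, f, t in zip(columns, target_flat, target_tilted))

def optimize(columns, target_flat, target_tilted, iters=100_000):
    best = image_error(columns, target_flat, target_tilted)
    for _ in range(iters):
        ci = random.randrange(len(columns))
        vi = random.randrange(len(columns[ci]))
        old = columns[ci][vi]
        columns[ci][vi] = random.choice(MATERIALS)   # propose swapping one voxel's material
        new = image_error(columns, target_flat, target_tilted)
        if new <= best:
            best = new                               # keep the improvement
        else:
            columns[ci][vi] = old                    # otherwise revert
    return columns
```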

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.
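The deposition step itself reduces to a nested loop over the optimized voxel grid. The sketch below is hypothetical — the printer interface and grid indexing are stand-ins, not the team’s control software — but it illustrates “the right material into the right voxel, layer by layer.”

```python
def print_actuator(design, printer):
    """Deposit the optimized voxel grid layer by layer.
    `design[z][y][x]` holds a material name, or None for an empty voxel."""
    for z, layer in enumerate(design):
        for y, row in enumerate(layer):
            for x, material in enumerate(row):
                if material is not None:
                    # fire one ~30-micron droplet of the designated material
                    printer.dispense(material, x, y, z)
        # each droplet solidifies on landing, so the next layer can follow
```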

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight, lift, and other metrics.

“We’re not yet able to print wings or anything on that scale, or with those materials,” said Sundaram. “But I think this is a first step toward that goal.”

Editor’s note: This article republished with permission from MIT News.

Cassie bipedal robot a platform for tackling locomotion challenges

Working in the Dynamic Autonomy and Intelligent Robotics lab at the University of Pennsylvania, Michael Posa (right) and graduate student Yu-Ming Chen use Cassie to help develop better algorithms that can help robots move more like people. | Credit: Eric Sucar

What has two legs, no torso, and hangs out in the basement of the University of Pennsylvania’s Towne Building?

It’s Cassie, a dynamic bipedal robot, a recent addition to Michael Posa’s Dynamic Autonomy and Intelligent Robotics (DAIR) Lab. Built by Agility Robotics, a company in Albany, Oregon, Cassie offers Posa and his students the chance to create and test the locomotion algorithms they’re developing on a piece of equipment that’s just as cutting-edge as their ideas.

“We’re really excited to have it. It offers us capabilities that are really unlike anything else on the commercial market,” says Posa, a mechanical engineer in the School of Engineering and Applied Science. “There aren’t many options that exist, and this means that every single lab that wants to do walking research doesn’t have to spend three years building its own robot.”

Having Cassie lets Posa’s lab members spend all their time working to solve the huge challenge of designing algorithms so that robots can walk and navigate across all kinds of terrain and circumstances.

“What we have is a system really designed for dynamic locomotion,” he says. “We get very natural speed in terms of leg motions, like picking up a foot and putting it down somewhere else. For us, it’s a really great system.”

“It offers us capabilities that are really unlike anything else on the commercial market,” Posa says about Cassie. | Credit: Eric Sucar

Why do the legs matter? Because they dramatically expand the possibilities of what a robot can do. “You can imagine how legged robots have a key advantage over wheeled robots in that they are able to go into unstructured environments. They can go over relatively rough terrain, into houses, up a flight of stairs. That’s where a legged robot excels,” Posa says. “This is useful in all kinds of applications, including basic exploration, but also things like disaster recovery and inspection tasks. That’s what’s drawing a lot of industry attention these days.”

Of course, walking over different terrain or up a curb, step, or other incline dramatically increases what a robot has to do to stay upright. Consider what happens when you walk: Bump into something with your elbow, and your body has to reverse itself to avoid knocking it over, as well as stabilize itself to avoid falling in the opposite direction.

Related: Ford package delivery tests combine autonomous vehicles, bipedal robots

A robot has to be told to do all of that – which is where Posa’s algorithms come in, starting from where Cassie’s feet go down as it takes each step.

“Even with just legs, you have to make all these decisions about where you’re going to put your feet,” he says. “It’s one of those decisions that’s really very difficult to handle because everything depends on where and when you’re going to put your feet down, and putting that foot down creates an impact: You shift your weight, which changes your balance, and so on.



“This is a discrete event that happens quickly. From a computational standpoint, that’s one of the things we really struggle with—how do we handle these contact events?”

Then there’s the issue of how to model what you want to tell the robot to do. Simple modeling considers the robot as a point moving in space rather than, for example, a machine with six joints in its leg. But of course, the robot isn’t a point, and working with those models means sacrificing capability. Posa’s lab is trying to build more sophisticated models that, in turn, make the robot move more smoothly.

“We’re interested in the sort of middle ground, this Goldilocks regime between ‘this robot has 12 different motors’ and ‘this robot is a point in space,'” he says.

Related: 2019 the Year of Legged Robots

Cassie’s predecessor was called ATRIAS, an acronym for “assume the robot is a sphere.” ATRIAS allowed for more sophisticated models and more ability to command the robot, but was still too simple, Posa says. “The real robot is always different than a point or sphere. The question is where should our models live on this spectrum, from very simple to very complicated?”

Two graduate students in the DAIR Lab have been working on the algorithms, testing them in simulation and then, finally, on Cassie. Most of the work is virtual, since Cassie is really for testing the pieces that pass the simulation test.

“You write the code there,” says Posa, gesturing at a computer across the lab, “and then you flip a switch and you’re running it with the real robot. In general, if it doesn’t work in the simulator, it’s not going to work in the real world.”

Graduate students, including Chen (left), work on designing new algorithms and running computer simulations before testing them on Cassie. | Credit: Eric Sucar

On the computer, the researchers can take more risks, says graduate student Yu-Ming Chen. “We don’t break the robot in simulation,” he says, chuckling.

So what happens when you take these legs for a spin? The basic operation involves a marching type of step, as Cassie’s metal feet clang against the floor. But even as the robot makes these simple motions, it’s easy to see how the joints and parts work together to make a realistic-looking facsimile of a legged body from the waist down.

With Cassie as a platform, Posa says he’s excited to see how his team can push locomotion research forward.

“We want to design algorithms to enable robots to interact with the world in a safe and productive fashion,” he says. “We want [the robot] to walk in a way that is efficient, energetically, so it can travel long distances, and walk in a way that’s safe for both the robot and the environment.”

Editor’s Note: This article was republished from the University of Pennsylvania.

MIT ‘walking motor’ could help robots assemble complex structures


Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

New approach to building robots

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled them into a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Robots working in confined spaces

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures make it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”


Editor’s Note: This article was republished from MIT News.


Prescribing a Robot ‘Intervention’

 

According to the Australian Centre for Robotic Vision’s Nicole Robinson, research studies on the impact of social robot interventions have so far been few and unsophisticated. The good news: the results are encouraging.

As our world struggles with mental health and substance use disorders affecting 970 million people and counting (according to 2017 figures), the time is ripe for meaningful social robot ‘interventions’. That’s the call by Australian Centre for Robotic Vision Research Fellow Nicole Robinson – a roboticist with expertise in psychology and health – as detailed in the Journal of Medical Internet Research (JMIR).

Having led Australia’s first study into the positive impact of social robot interventions on eating habits (in 2017), Robinson and the Centre’s social robotics team believe it is time to focus on weightier health and wellbeing issues, including depression, drug and alcohol abuse, and eating disorders.

Global Trials To Date
In the recently published JMIR paper, A Systematic Review of Randomised Controlled Trials on Psychosocial Health Interventions by Social Robots, Robinson reveals that global trials to date are ‘very few and unsophisticated’. Only 27 trials met the inclusion criteria for psychosocial health interventions; many of them lacked a follow-up period, targeted small sample groups (<100 participants), and were limited to the contexts of child health, autism spectrum disorder (ASD) and older adults.

Of concern, no randomised controlled trials have yet involved adolescents or young adults at a time when the World Health Organisation (WHO) estimates one in six adolescents (aged 10-19) are affected by mental health disorders. According to the agency, half of all mental health conditions start by 14 years of age, but most cases are undetected and untreated.

WHO warns: “The consequences of not addressing adolescent mental health conditions extend to adulthood, impairing both physical and mental health and limiting opportunities to lead fulfilling lives…”

In good news for the Centre’s research into social robot interventions, WHO pushes for the adoption of multi-level and varied prevention and promotion programs, including via digital platforms.

A Therapeutic Alliance
Despite the limited amount of global research conducted on psychosocial health interventions by social robots, Robinson believes the results are nevertheless encouraging. They indicate a ‘therapeutic alliance’ between robots and humans could lead to positive effects similar to the use of digital interventions for managing anxiety, depression and alcohol use.

“The beauty of social robot intervention is that they could help to side-step potential negative effects of face-to-face therapy with a human health practitioner such as perceived judgement or stigma,” said Robinson, who has used Nao and SoftBank’s Pepper robots in her research at the Centre.

“Robots can help support a self-guided program or health service by interacting with people to help keep them on track with their health goals.

“Our research is not about replacing healthcare professionals, but identifying treatment gaps where social robots can effectively assist by engaging patients to discuss sensitive topics and identify problems that may require the attention of a health practitioner.”

In the JMIR paper, published last month, Robinson puts out a timely global call for research on social robot interventions to transition from exploratory investigations to large-scale controlled trials with sophisticated methodology.

At the Australian Centre for Robotic Vision’s QUT headquarters, she’s helping to lay the groundwork. The Centre’s research, sponsored by the Queensland Government, is assessing the capabilities of social robots and using SoftBank Robotics’ Pepper robot to explore applications where social robots can deliver value beyond their novelty appeal.

Social Robot Trials
In 2018, the Centre’s social robotics team initiated a set of trials involving Pepper robots to measure the unique value of social robots in one-to-one interactions in healthcare. After supporting an Australia-first trial of a Pepper robot at Townsville Hospital and Health Service, the Centre’s team has placed Pepper into a QUT Health Clinic at Kelvin Grove Campus.

The three-month study to June 2019 involves Pepper delivering a brief health assessment and providing customised feedback that can be taken to a health practitioner to discuss issues around physical activity, dietary intake, alcohol use and smoking. Members of the public who are registered as patients at the QUT Health Clinic are invited to take part in this trial.

In a separate online trial, the Centre’s social robotics team is assessing people’s attitudes to social robots and their willingness to engage with and discuss different topics with a robot or human as the conversation partner.

For more information on the Australian Centre for Robotic Vision’s work creating robots able to see and understand like humans, download our 2018 Annual Report.


Editor’s Note: This article was republished with permission from The Australian Centre for Robotic Vision.


Self-driving cars may not be best for older drivers, says Newcastle University study

VOICE member Ian Fairclough and study lead Dr. Shuo Li in test of older drivers. Source: Newcastle University

With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding we see them adjust their driving to avoid difficult situations,” explained Dr Shuo Li, an expert in intelligent transport systems at Newcastle University.

“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”

Potential game-changer

Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities in all situations without input from the driver.

But it’s not as clear-cut as it seems, said Li.

“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”

“But, unlike Levels 4 or 5, there are still some situations where the car would ask the driver to take back control, and at that point they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people that switch between tasks is quite easy, but as we age, it becomes increasingly more difficult, and this is further complicated if the conditions on the road are poor.”

Newcastle University DriveLAB tests older drivers

Led by Newcastle University’s Professor Phil Blythe and Dr Li, the Newcastle University team have been researching the time it takes for older drivers to take back control of an automated car in different scenarios and also the quality of their driving in these different situations.

Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to “take back” control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.

The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time), when they provided “active input” such as braking or taking the steering wheel (take-over time), and finally when they registered the obstruction and indicated to move out and avoid it (indicator time).

“In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60mph, that means our older drivers would have needed an extra 35m warning distance — that’s equivalent to the length of 10 cars.

“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
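As a rough check of that figure (the arithmetic here is a sanity check, not from the study itself): 60 mph is about 26.8 meters per second, so a 1.3-second difference in response time corresponds to roughly 35 meters of extra travel.

extra warning distance ≈ v × Δt ≈ 26.8 m/s × (8.3 s − 7.0 s) ≈ 35 m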

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.

In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.

Newcastle University DriveLAB

VOICE member Pat Wilkinson. Source: Newcastle University

Designing automated cars of the future

The research team also explored older drivers’ opinions and requirements towards the design of automated vehicles after gaining first-hand experience with the technologies on the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.

The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.

“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”

“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”

Case studies of older drivers

Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.

Now 74, the former Magistrate said it’s interesting to see how technology is changing and gradually taking the control – and responsibility – away from the driver.

“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”

“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.

“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”

Ian Fairclough, 77 from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”

“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”

Fairclough took part in two of the studies in the VR simulator and said it was difficult to switch your attention quickly from one task to another.

“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear and practical.  I think perhaps you can have too many buttons.”

Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.

4 Overheating solutions for commercial robotics

Stanford University researchers have developed a lithium-ion battery that shuts down before overheating. Source: Stanford University

Overheating can become a severe problem for robots. Excessive temperatures can damage internal systems or, in the most extreme cases, cause fires. Commercial robots that regularly get too hot can also cost precious time, as operators are forced to shut down and restart the machines during a given shift.

Fortunately, robotics designers have several options for keeping industrial robots cool and enabling workflows to progress smoothly. Here are four examples of technologies that could keep robots at the right temperature.

1. Lithium-ion batteries that automatically shut off and restart

Many robots, especially mobile platforms for factories or warehouses, have lithium-ion battery packs. Such batteries are popular and widely available, but they’re also prone to overheating and potentially exploding.

Researchers at Stanford University engineered a battery with a special coating that stops it from conducting electricity if it gets too hot. As the heat level climbs, the coating expands, causing a functional change that makes the battery no longer conductive. Once it cools, however, the battery starts providing power as usual.

The research team did not specifically test its battery coating in robots powered by lithium-ion batteries. However, it noted that the work has practical merit for a variety of use cases because the temperature at which the battery shuts down can be tuned.

For example, if a robot has extremely sensitive internal parts, users would likely want it to shut down at a lower temperature than when using it in a more tolerant machine.

2. Sensors that measure a robot’s ‘health’ to avoid overheating

Commercial robots often allow corporations to achieve higher, more consistent performance levels than would be possible with human effort alone. Industrial-grade robots don’t need rest breaks, but unlike humans who might speak up if they feel unwell and can’t complete a shift, robots can’t necessarily notify operators that something’s wrong.

However, Saarland University researchers have devised a method that subjects industrial machines to the equivalent of a continuous medical checkup. Similar to how consumer health trackers measure things like a person’s heart rate and activity levels and let them share these metrics with a physician, the team aims to do the same with industrial machinery.

Continual robot monitoring

A research team at Saarland University has developed an early warning system for industrial assembly, handling, and packaging processes. Research assistants Nikolai Helwig (left) and Tizian Schneider test the smart condition monitoring system on an electromechanical cylinder. Credit: Oliver Dietze, Saarland University

It should be possible to see numerous warning signs before a robot gets too hot. The scientists explained that they use special sensors that fit inside the machines and can interact with one another as well as a robot’s existing process sensors. The sensors collect baseline data. They can also recognize patterns that could indicate a failing part — such as that the machine gets hot after only a few minutes of operating.

That means the sensors could warn plant operators of immediate issues, like when a robot requires an emergency shutdown because of overheating. It could also help managers understand if certain processes make the robots more likely to overheat than others. Thanks to the constant data these sensors provide, human workers overseeing the robots should have the knowledge they need to intervene before a catastrophe occurs.
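The warning logic the researchers describe amounts to comparing live readings against a healthy baseline. The sketch below is a deliberately simplified illustration — the baseline statistics, threshold, and function names are assumptions, not Saarland University’s system — of how such a check might look for a single temperature stream.

```python
from statistics import mean, stdev

def build_baseline(healthy_temps):
    """Summarize temperatures logged during known-good operation."""
    return mean(healthy_temps), stdev(healthy_temps)

def check_reading(temp_c, baseline, k=3.0):
    """Flag a reading that drifts well above the healthy baseline."""
    mu, sigma = baseline
    if temp_c > mu + k * sigma:
        return "warn: temperature outside normal operating range"
    return "ok"

# Example: baseline from healthy data, then a live check
baseline = build_baseline([41.2, 42.0, 41.7, 42.3, 41.9])
print(check_reading(55.4, baseline))   # -> warn: temperature outside normal operating range
```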

Manufacturers already use predictive analytics to determine when to perform maintenance. This approach could provide even more benefits because it goes beyond maintenance alerts and warns if robots stray from their usual operating conditions because of overheating or other reasons that need further investigation.

3. Thermally conductive rubber

When engineers design robots or work in the power electronics sector, heat dissipation technologies are almost always among the things to consider before the product becomes functional. For example, even in a device that’s 95% efficient, the remaining 5% gets converted into heat that needs to escape.

Power electronics overheating roadmap

Source: Advanced Cooling Technologies

Pumped liquid, extruded heatsinks, and vapor chambers are some of the available methods for keeping power electronics cool. Returning to commercial robotics specifically, Carnegie Mellon University scientists have developed a material that aids in heat management for soft robots. They said their creation — nicknamed “thubber” — combines elasticity with high heat conductivity.

CMU thubber for overheating

A nano-CT scan of “thubber” showing the liquid-metal microdroplets inside the rubber material. Source: Carnegie Mellon University

The material stretches to more than six times its initial length, and that’s impressive in itself. However, the CMU researchers also mentioned that the blend of high heat conductivity and flexibility is crucial for facilitating dissipation. They pointed out that past technologies required attaching high-powered devices to inflexible mounts, but they now envision creating these from the thubber.

Then, the respective devices, whether bendable robots or folding electronics, could be more versatile and stay cool as they function.

4. Liquid cooling and fan systems

Many of the cooling technologies used in industrial robots work internally, so users don’t see them at work, but they know everything is functioning as it should because the machine stays at a desirable temperature. There are also some robots for which heat reduction is exceptionally important because of the tasks they take on. Firefighting robots are prime examples.

One of them, called Colossus, recently helped put out the Notre Dame fire in Paris. It has an onboard smoke ventilation system that likely has a heat-management component, too. Purchasers can also pay more to get a smoke-extracting fan. It’s an example of a mobile robot that uses lithium-ion batteries, making it a potential candidate for the first technology on the list.

There’s another firefighting robot called the Thermite, and it uses both water and fans to stay cool. For example, the robot can pump out 500 gallons of water per minute to control a blaze, but a portion of that liquid goes through the machine’s internal “veins” first to keep it from overheating.

In addition, part of Thermite converts into a sprinkler system, and onboard fans help recycle the associated mist and cool the machine’s components.

An array of overheating options

Robots are increasingly tackling jobs that are too dangerous for humans. As these examples show, they’re up to the task as long as the engineers working to develop those robots remain aware of internal cooling needs during the design phase.

This list shows that engineers aren’t afraid to pursue creative solutions as they look for ways to avoid overheating. Although many of the technologies described here are not yet available for people to purchase, it’s worthwhile for developers to stay abreast of the ongoing work. The attempts seem promising, and even cooling efforts that aren’t ready for mainstream use could lead to overall progress.

Vegebot robot applies machine learning to harvest lettuce

Vegebot, a vegetable-picking robot, uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop.

A team at the University of Cambridge initially trained Vegebot to recognize and harvest iceberg lettuce in the laboratory. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The researchers published their results in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the U.K., iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.

The Vegebot first identifies the “target” crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested. Finally, it cuts the lettuce from the rest of the plant without crushing it so that it is “supermarket ready.”

“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

Vegebot designed for lettuce-picking challenge

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image. Then for each lettuce, the robot classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
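At a high level, that decision flow is a detect-then-classify loop. The sketch below is illustrative only — detect_lettuces, classify, and the robot interface are hypothetical stand-ins, not the Vegebot’s actual software — but it shows how the two stages fit together.

```python
def harvest_pass(field_image, detect_lettuces, classify, robot):
    """One pass over an overhead image of the field."""
    for head in detect_lettuces(field_image):    # stage 1: find every lettuce in the image
        label = classify(head)                   # stage 2: "ready", "immature", or "diseased"
        if label == "ready":
            robot.cut(head.position)             # harvest without crushing the head
        # immature or diseased heads are left in place for a later pass
```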

Vegebot in the field

Vegebot uses machine vision to identify heads of iceberg lettuce. Credit: University of Cambridge

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognize healthy lettuce in the lab, the team then trained it in the field, in a variety of weather conditions, on thousands of real lettuce heads.

A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In the future, robotic harvesters could help address problems with labor shortages in agriculture. They could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded.

However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6 million ($8.26 million U.S.) for the new CDT, which will support at least 50 Ph.D. students.

Programmable soft actuators show potential of soft robotics at TU Delft

Researchers at the Delft University of Technology in the Netherlands have developed highly programmable soft actuators that, similar to the human hand, combine soft and hard materials to perform complex movements. These materials have great potential for soft robots that can safely and effectively interact with humans and other delicate objects, said the TU Delft scientists.

“Robots are usually big and heavy. But you also want robots that can act delicately, for instance, when handling soft tissue inside the human body. The field that studies this issue, soft robotics, is now really taking off,” said Prof. Amir Zadpoor, who supervised the research presented in the July 8 issue of Materials Horizons.

“What you really want is something resembling the features of the human hand including soft touch, quick yet accurate movements, and power,” he said. “And that’s what our soft 3D-printed programmable materials strive to achieve.”

Tunability

Owing to their soft touch, soft robots can safely and effectively interact with humans and other delicate objects. Soft programmable mechanisms are required to power this new generation of robots. Flexible mechanical metamaterials, working on the basis of mechanical instability, offer unprecedented functionalities programmed into their architected fabric that make them potentially very promising as soft mechanisms, said the TU Delft researchers.

“However, the tunability of the mechanical metamaterials proposed so far has been very limited,” said first author Shahram Janbaz.

Programmable soft actuators

“We now present some new designs of ultra-programmable mechanical metamaterials, where not only the actuation force and amplitude, but also the actuation mode could be selected and tuned within a very wide range,” explained Janbaz. “We also demonstrate some examples of how these soft actuators could be used in robotics, for instance as a force switch, kinematic controllers, and a pick-and-place end-effector.”

Soft actuators from TU Delft

A conventional robotic arm is modified using the developed soft actuators to provide soft touch during pick-and-place tasks. Source: TU Delft

Buckling

“The function is already incorporated in the material,” Zadpoor explained. “Therefore, we had to look deeper at the phenomenon of buckling. This was once considered the epitome of design failure, but has been harnessed during the last few years to develop mechanical metamaterials with advanced functionalities.”

“Soft robotics in general and soft actuators in particular could greatly benefit from such designer materials,” he added. “Unlocking the great potential of buckling-driven materials is, however, contingent on resolving the main limitation of the designs presented to date, namely the limited range of their programmability. We were able to calculate and predict higher modes of buckling and make the material predisposed to these higher modes.”

3D printing

“So, we present multi-material buckling-driven metamaterials with high levels of programmability,” said Janbaz. “We combined rational design approaches based on predictive computational models with advanced multi-material additive manufacturing techniques to 3D print cellular materials with arbitrary distributions of soft and hard materials in the central and corner parts of their unit cells.”

“Using the geometry and spatial distribution of material properties as the main design parameters, we developed soft mechanical metamaterials behaving as mechanisms whose actuation force and actuation amplitude could be adjusted,” he said.

Editor’s note: This article republished from TU Delft.

KIST researchers teach robot to trap a ball without coding

KIST teaching

KIST’s research shows that robots can be intuitively taught to be flexible by humans rather than through numerical calculation or programming the robot’s movements. Credit: KIST

The Center for Intelligent & Interactive Robotics at the Korea Institute of Science and Technology, or KIST, said that a team led by Dr. Kee-hoon Kim has developed a way of teaching “impedance-controlled robots” through human demonstrations. The method uses surface electromyograms of muscles, and the team succeeded in teaching a robot to trap a dropped ball like a soccer player.

A surface electromyogram (sEMG) is an electric signal produced during muscle activation that can be picked up on the surface of the skin, said KIST, which is led by Pres. Byung-gwon Lee.

Recently developed impedance-controlled robots have opened up a new era of robotics based on the natural elasticity of human muscles and joints, which conventional rigid robots lack. Robots with flexible joints are expected to be able to run, jump hurdles and play sports like humans. However, the technology required to teach such robots to move in this manner has been unavailable until recently.

KIST uses human muscle signals to teach robots how to move

The KIST research team claimed to be the first in the world to develop a way of teaching new movements to impedance-controlled robots using human muscle signals. With this technology, which detects not only human movements but also muscle contractions through sEMG, it’s possible for robots to imitate movements based on human demonstrations.

Dr. Kee-hoon Kim’s team said it succeeded in using sEMG to teach a robot to quickly and adroitly trap a rapidly falling ball before it comes into contact with a solid surface or bounces too far to reach — similar to the skills employed by soccer players.

sEMG sensors were attached to a man’s arm, allowing him to simultaneously control the location and flexibility of the robot’s rapid upward and downward movements. The man then “taught” the robot how to trap a rapidly falling ball by giving a personal demonstration. After learning the movement, the robot was able to skillfully trap a dropped ball without any external assistance.
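One way to picture the teaching step is as a mapping from muscle activity to a stiffness command recorded alongside the demonstrated motion. The sketch below is a schematic illustration; the normalization, stiffness range, and data format are assumptions, not KIST’s published method.

```python
def semg_to_stiffness(semg_rms, k_min=50.0, k_max=800.0, rms_max=1.0):
    """Map normalized sEMG activity to joint stiffness (N*m/rad):
    a tense arm commands a stiff robot, a relaxed arm a compliant one."""
    activation = min(max(semg_rms / rms_max, 0.0), 1.0)
    return k_min + activation * (k_max - k_min)

def record_demonstration(semg_stream, pose_stream):
    """Pair each demonstrated arm pose with the stiffness implied by sEMG."""
    return [(pose, semg_to_stiffness(rms))
            for rms, pose in zip(semg_stream, pose_stream)]

# The recorded (pose, stiffness) trajectory could later be replayed by an
# impedance-controlled arm to reproduce the ball-trapping motion.
```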

KIST movements

sEMG sensors attached to a man’s arm allowed him to control the location and flexibility of a robot’s rapid movements. Source: KIST

This research outcome, which shows that robots can be intuitively taught to be flexible by humans, has attracted much attention, as it was not accomplished through numerical calculation or programming of the robot’s movements. This study is expected to help advance the study of interactions between humans and robots, bringing us one step closer to a world in which robots are an integral part of our daily lives.

Kim said, “The outcome of this research, which focuses on teaching human skills to robots, is an important achievement in the study of interactions between humans and robots.”

Argo AI, CMU developing autonomous vehicle research center


Argo AI

Argo AI autonomous vehicle. | Credit: Argo AI

Argo AI, a Pittsburgh-based autonomous vehicle company, has donated $15 million to Carnegie Mellon University (CMU) to fund a new research center. The Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research will “pursue advanced research projects to help overcome hurdles to enabling self-driving vehicles to operate in a wide variety of real-world conditions, such as winter weather or construction zones.”

Argo was founded in 2016 by a team with ties to CMU (more on that later). The five-year partnership between Argo and CMU will fund research into advanced perception and next-generation decision-making algorithms for autonomous vehicles. The center’s research will address a number of technical topics, including smart sensor fusion, 3D scene understanding, urban scene simulation, map-based perception, imitation and reinforcement learning, behavioral prediction and robust validation of software.

“We are thrilled to deepen our partnership with Argo AI to shape the future of self-driving technologies,” CMU President Farnam Jahanian said. “This investment allows our researchers to continue to lead at the nexus of technology and society, and to solve society’s most pressing problems.”

In February 2017, Ford announced that it was investing $1 billion over five years in Argo, combining Ford’s autonomous vehicle development expertise with Argo AI’s robotics experience. Earlier this month, Argo unveiled its third-generation test vehicle, a modified Ford Fusion Hybrid. Argo is now testing its autonomous vehicles in Detroit, Miami, Palo Alto, and Washington, DC.

Argo last week released its HD maps dataset, Argoverse. Argo said this will help the research community “compare the performance of different (machine learning – deep net) approaches to solve the same problem.”



“Argo AI, Pittsburgh and the entire autonomous vehicle industry have benefited from Carnegie Mellon’s leadership. It’s an honor to support development of the next-generation of leaders and help unlock the full potential of autonomous vehicle technology,” said Bryan Salesky, CEO and co-founder of Argo AI. “CMU and now Argo AI are two big reasons why Pittsburgh will remain the center of the universe for self-driving technology.”

Deva Ramanan, an associate professor in the CMU Robotics Institute, who also serves as machine learning lead at Argo AI, will be the center’s principal investigator. The center’s research will involve faculty members and students from across CMU. The center will give students access to the fleet-scale data sets, vehicles and large-scale infrastructure that are crucial for advancing self-driving technologies and that otherwise would be difficult to obtain.

CMU’s other autonomous vehicle partnerships

This isn’t the first autonomous vehicle company to see potential in CMU. In addition to Argo AI, CMU performs related research supported by General Motors, Uber and other transportation companies.

Its partnership with Uber is perhaps CMU’s most high-profile autonomous vehicle partnership, and it’s for all the wrong reasons. In 2015, Uber announced a strategic partnership with CMU that included the creation of a research lab near campus aimed at kick starting autonomous vehicle development.

But that relationship ended up gutting CMU’s National Robotics Engineering Center (NREC). More than a dozen CMU researchers, including the NREC’s director, left to work at the Uber Advanced Technologies Center.


Argo’s connection to CMU

As mentioned earlier, Argo’s co-founders have strong ties to CMU. Argo co-founder and president Peter Rander earned his master’s and Ph.D. degrees at CMU. Salesky graduated from the University of Pittsburgh in 2002, but worked at the NREC for a number of years, managing a portfolio of the center’s largest commercial programs that included autonomous mining trucks for Caterpillar. In 2007, Salesky led software engineering for Tartan Racing, CMU’s winning entry in the DARPA Urban Challenge.

Salesky departed NREC and joined the Google self-driving car team in 2011 to continue the push toward making self-driving cars a reality. While at Google, he was responsible for the development and manufacture of its hardware portfolio, which included self-driving sensors, computers, and several vehicle development programs.

Brett Browning, Argo’s VP of Robotics, received his Ph.D. (2000) and bachelor’s degree in electrical engineering and science from the University of Queensland. He was a senior faculty member at the NREC for 12-plus years, pursuing field robotics research in defense, oil and gas, mining and automotive applications.