Microrobots activated by laser pulses could deliver medicine to tumors

Targeting medical treatment to an ailing body part is a practice as old as medicine itself. Drops go into itchy eyes. A broken arm goes into a cast. But often what ails us is inside the body and is not so easy to reach. In such cases, a treatment like surgery or chemotherapy might be called for. A pair of researchers in Caltech’s Division of Engineering and Applied Science are working on an entirely new form of treatment — microrobots that can deliver drugs to specific spots inside the body while being monitored and controlled from outside the body.

“The microrobot concept is really cool because you can get micromachinery right to where you need it,” said Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering at the California Institute of Technology. “It could be drug delivery, or a predesigned microsurgery.”

The microrobots are a joint research project of Wang and Wei Gao, assistant professor of medical engineering, and are intended for treating tumors in the digestive tract.

Developing jet-powered microrobots

The microrobots consist of microscopic spheres of magnesium metal coated with thin layers of gold and parylene, a polymer that resists digestion. The layers leave a circular portion of the sphere uncovered, kind of like a porthole. The uncovered portion of the magnesium reacts with the fluids in the digestive tract, generating small bubbles. The stream of bubbles acts like a jet and propels the sphere forward until it collides with nearby tissue.
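The bubbles are hydrogen gas. The article doesn’t spell out the chemistry, but the textbook reactions of magnesium with gastric acid and with water are:

Mg + 2HCl → MgCl₂ + H₂ (in acidic gastric fluid)
Mg + 2H₂O → Mg(OH)₂ + H₂ (in near-neutral intestinal fluid)

Both yield benign byproducts, which is part of why magnesium is attractive for ingestible micromotors.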

On their own, spherical magnesium microrobots that can zoom around might be interesting, but they are not especially useful. To turn them from a novelty into a vehicle for delivering medication, Wang and Gao made some modifications.

First, a layer of medication is sandwiched between an individual microsphere and its parylene coat. Then, to protect the microrobots from the harsh environment of the stomach, they are enveloped in microcapsules made of paraffin wax.

Laser-guided delivery

At this stage, the spheres are capable of carrying drugs, but still lack the crucial ability to deliver them to a desired location. For that, Wang and Gao use photoacoustic computed tomography (PACT), a technique developed by Wang that uses pulses of infrared laser light.

The infrared laser light diffuses through tissues and is absorbed by oxygen-carrying hemoglobin molecules in red blood cells, causing the molecules to vibrate ultrasonically. Those ultrasonic vibrations are picked up by sensors pressed against the skin. The data from those sensors is used to create images of the internal structures of the body.
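Wang’s actual reconstruction algorithms are not described here, but the core idea of forming an image from time-of-flight ultrasound data can be sketched with simple delay-and-sum beamforming. The sensor geometry, sampling rate, and speed of sound below are illustrative assumptions, not the PACT implementation.

```python
import numpy as np

C = 1500.0  # approximate speed of sound in soft tissue, m/s
FS = 40e6   # assumed sensor sampling rate, Hz

def delay_and_sum(signals, sensor_xy, pixel_xy):
    """signals: (n_sensors, n_samples) ultrasonic traces from skin sensors.
    sensor_xy: (n_sensors, 2) sensor positions, in meters.
    pixel_xy: (n_pixels, 2) reconstruction-grid positions, in meters.
    Each pixel sums every sensor's trace at the sample matching the
    acoustic time of flight from that pixel to that sensor."""
    n_sensors, n_samples = signals.shape
    sensor_idx = np.arange(n_sensors)
    image = np.zeros(len(pixel_xy))
    for i, p in enumerate(pixel_xy):
        dists = np.linalg.norm(sensor_xy - p, axis=1)   # meters
        samples = np.round(dists / C * FS).astype(int)  # time -> sample index
        ok = samples < n_samples
        image[i] = signals[sensor_idx[ok], samples[ok]].sum()
    return image
```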

Previously, Wang has shown that variations of PACT can be used to identify breast tumors, or even individual cancer cells. With respect to the microrobots, the technique has two jobs. The first is imaging. By using PACT, the researchers can find tumors in the digestive tract and also track the location of the microrobots, which show up strongly in the PACT images.


Microrobots activated by lasers and powered by magnesium jets could deliver medicine within the human body. Source: Caltech

Once the microrobots arrive in the vicinity of the tumor, a high-power continuous-wave near-infrared laser beam is used to activate them. Because the microrobots absorb the infrared light so strongly, they briefly heat up, melting the wax capsule surrounding them, and exposing them to digestive fluids.

At that point, the microrobots’ bubble jets activate, and the microrobots begin swarming. The jets are not steerable, so the technique is sort of a shotgun approach — the microrobots will not all hit the targeted area, but many will. When they do, they stick to the surface and begin releasing their medication payload.

“These micromotors can penetrate the mucus of the digestive tract and stay there for a long time. This improves medicine delivery,” Gao says. “But because they’re made of magnesium, they’re biocompatible and biodegradable.”

Pushing the concept

Tests in animal models show that the microrobots perform as intended, but Gao and Wang say they are planning to continue pushing the research forward.

“We demonstrated the concept that you can reach the diseased area and activate the microrobots,” Gao says. “The next step is evaluating the therapeutic effect of them.”

Gao also says he would like to develop variations of the microrobots that can operate in other parts of the body, and with different types of propulsion systems.

Wang says his goal is to improve how his PACT system interacts with the microrobots. The infrared laser light it uses has some difficulty reaching into deeper parts of the body, but he says it should be possible to develop a system that can penetrate further.

The paper describing the microrobot research, titled “A microrobotic system guided by photoacoustic tomography for targeted navigation in intestines in vivo,” appears in the July 24 issue of Science Robotics. Other co-authors include Zhiguang Wu, Lei Li, Yiran Yang (MS ’18), Yang Li, and So-Yoon Yang of Caltech; and Peng Hu of Washington University in St. Louis. Funding for the research was provided by the National Institutes of Health and Caltech’s Donna and Benjamin M. Rosen Bioengineering Center.

Editor’s note: This article republished from the California Institute of Technology.


ASTM International proposes standards guide, center of excellence for exoskeletons

One of the barriers to more widespread development and adoption of exoskeletons for industrial, medical, and military use has been a lack of standards. ASTM International this month proposed a guide to provide standardized tools to assess and improve the usability and usefulness of exoskeletons and exosuits.

“Exoskeletons and exosuits can open up a world of possibilities, from helping workers perform industrial tasks while not getting overstressed, to helping stroke victims learn to walk again, to helping soldiers carry heavier rucksacks longer distances,” said Kevin Purcell, an ergonomist at the U.S. Army Public Health Center’s Aberdeen Proving Ground. “But if it doesn’t help you perform your task and/or it’s hard to use, it won’t get used.”

He added that the guide will incorporate ways to understand the attributes of exoskeletons, as well as observation methods and questionnaires to help assess an exoskeleton’s performance and safety.

“The biggest challenge in creating this standard is that exoskeletons change greatly depending on the task the exoskeleton is designed to help,” said Purcell. “For instance, an industrial exoskeleton is a totally different design from one used for medical rehabilitation. The proposed standard will need to cover all types and industries.”

According to Purcell, industrial, medical rehabilitation, and defense users will benefit most from the proposed standard, as will exoskeleton manufacturers and regulatory bodies.

The F48 committee of ASTM International, previously known as the American Society for Testing and Materials, was formed in 2017. It is currently working on the proposed exoskeleton and exosuit standard, WK68719. Its six subcommittees have about 150 members, including startups, government agencies, and enterprises such as Boeing and BMW.

ASTM publishes first standards

In May, ASTM International published its first two standards documents, which are intended to provide consensus terminology (F3323) and set forth basic labeling and other informational requirements (F3358). The standards are available for purchase.

“Exoskeletons embody the technological promise of empowering humans to be all they can be,” said F48 committee member William Billotte, a physical scientist at the U.S. National Institute of Standards and Technology (NIST). “We want to make sure that labels and product information are clear, so that exoskeletons fit people properly, so that they function safely and effectively, and so that people can get the most from these innovative products.”

The committee is working on several proposed standards and welcomes more participation from members of the exoskeleton community. For example, Billotte noted that the committee seeks experts in cybersecurity due to the growing need to secure data, controls, and biometrics in many exoskeletons.


An exoskeleton vest at a BMW plant in Spartanburg, S.C. Source: BMW

Call for an exoskeleton center of excellence

Last month, ASTM International called for proposals for an “Exo Technologies Center of Excellence.” The winner would receive up to $250,000 per year for up to five years. Full proposals are due today, and the winner will be announced in September, said ASTM.

“Now is the right time to create a hub of collaboration among startups, companies, and other entities that are exploring how exoskeletons could support factory workers, patients, the military, and many other people,” stated ASTM International President Katharine Morgan. “We look forward to this new center serving as a catalyst for game-changing R&D, standardization, related training, partnerships, and other efforts that help the world benefit from this exciting new technology.”

The center of excellence is intended to fill knowledge gaps, provide a global hub for education and a neutral forum to discuss common challenges, and provide a library of community resources. It should also coordinate global links among stakeholders, said ASTM.

West Conshohocken, Pa.-based ASTM International said it meets World Trade Organization (WTO) principles for developing international standards. The organization’s standards are used globally in research and development, product testing, quality systems, commercial transactions, and more.


COAST Autonomous to deploy first self-driving vehicles at rail yard

PASADENA, Calif. — COAST Autonomous today announced that Harbor Rail Services of California has selected it to deploy self-driving vehicles at the Kinney County Railport in Texas.

This groundbreaking collaboration is the first deployment of self-driving vehicles at a U.S. rail yard, said the companies. Harbor Rail and COAST teams have identified a number of areas where autonomous vehicles can add value, including staff transportation, delivery of supplies and equipment, perimeter security, and lawn mowing.

COAST Autonomous is a software and technology company focused on delivering autonomous vehicle (AV) solutions at appropriate speeds for urban and campus environments. COAST said its mission is to build community by connecting people with mobility solutions that put pedestrians first and give cities back to people.

COAST has developed a full stack of AV software that includes mapping and localization, robotics and artificial intelligence, fleet management and supervision systems. Partnering with proven manufacturers, COAST said it can provide a variety of vehicles equipped with its software to offer Mobility-as-a-Service (MaaS) to cities, theme parks, campuses, airports, and other urban environments.

The company said its team has experience and expertise in all aspects of implementing and operating AV fleets while prioritizing safety and the user experience. Last year, the company conducted a demonstration in New York’s Times Square.

Harbor Rail operates railcar repair facilities across the U.S., including the Kinney County Railport (KCRP), a state-of-the-art railcar repair facility that Harbor Rail operates near the U.S.-Mexico border. KCRP is located on 470 acres of property owned by Union Pacific, the largest railroad in North America. The facility prepares railcars to meet food-grade guidelines, so they are ready to be loaded with packaged beer in Mexico and return to the U.S. with product for distribution.

COAST completes mapping, ready to begin service

COAST has completed 3D mapping of the facility, a first step in any such deployment, and the first self-driving vehicle is expected to begin service at KCRP next month.

“Through the introduction of re-designed trucks, innovative process improvements and adoption of data-driven KPIs [key performance indicators], Harbor Rail successfully reduced railcar rejection rates from 30% to 0.03% in KCRP’s first year of operations,” said Mark Myronowicz, president of Harbor Rail. “However, I am always looking for ways to improve our performance and provide an even better service for our customers.”


Source: COAST Autonomous

“At a large facility like KCRP, we have many functions that I am convinced can be carried out by COAST vehicles,” Myronowicz said. “This will free up additional labor to work on railcars, make us even more efficient, help keep the facility safe at night, and even cut the grass when most of us are asleep. This is a fantastic opportunity to demonstrate Harbor Rail’s commitment to being at the forefront of innovation and customer service.”

“This is an exciting moment for COAST, and we are looking forward to working with Harbor Rail’s industry-leading team,” said David M. Hickey, chairman and CEO of COAST Autonomous. “KCRP is exactly the type of facility that will show how self-driving technology can improve efficiency and cut costs.”

“While the futuristic vision of driverless cars has grabbed most of the headlines, COAST’s team has been focused on useful mobility solutions that can actually be deployed and create tremendous value for private sites, campuses, and urban centers,” he said. “Just as railroads are often the unsung heroes of the logistics industry, COAST’s vehicles will happily go about their jobs unnoticed and quietly change the world.”


Perrone Robotics begins pilot of first autonomous public shuttle in Virginia

ALBEMARLE COUNTY, Va. — Perrone Robotics Inc., in partnership with Albemarle County and JAUNT Inc., last week announced that Virginia’s first public autonomous shuttle service began pilot operations in Crozet, Va.

The shuttle service, called AVNU for “Autonomous Vehicle, Neighborhood Use,” is driven by Perrone Robotics’ TONY (TO Navigate You) autonomous shuttle technology applied to a Polaris Industries Inc. GEM shuttle. Perrone Robotics said its Neighborhood Electric Vehicle (NEV) shuttle has industry-leading perception and guidance capabilities and will drive fully autonomously (with a safety driver) through county neighborhoods and downtown areas on public roads, navigating vehicle and pedestrian traffic. The base GEM vehicle meets federal safety standards for vehicles in its class.

“With over 33,000 autonomous miles traveled using our technology, TONY-powered vehicles bring the highest level of autonomy available in the world today to NEV shuttles,” said Paul Perrone, founder/CEO of Perrone Robotics. “We are deploying an AV platform that has been carefully refined since 2003, applied in automotive and industrial autonomy spaces, and now being leveraged to bring last-mile services to communities such as those here in Albemarle County, Va. What we deliver is a platform that operates shuttles autonomously in complex environments with roundabouts, merges, and pedestrian-dense areas.”

The TONY-based AVNU shuttle will offer riders trips within local residential developments, trips to connect neighborhoods, and connections from these areas to the downtown business district.


Perrone Robotics provides autonomy for Polaris GEM shuttles. Source: Polaris Industries

More routes to come for Perrone AVNU shuttles

After the pilot phase, additional routes will demonstrate Albemarle County development initiatives, such as connector services for satellite parking. The shuttles will also connect with JAUNT's commuter shuttles, which are also targeted for autonomous operation with TONY technology.

“We have seen other solutions out there that require extensive manual operation for large portions of the course and very low speeds for traversal of tricky sections,” noted Perrone. “We surpass these efforts by using our innovative, super-efficient, and completely novel and patented autonomous engine, MAX®, that has over 16 years of engineering and over 33,000 on- and off-road miles behind it. We also use AI, but as a tool, not a crutch.”

“It is with great pleasure that we launch the pilot of the next generation of transportation — autonomous neighborhood shuttles — here in Crozet,” said Ann Mallek, White Hall District supervisor. “Albemarle County is so proud to support our hometown company, Perrone Robotics, and work with our transit provider JAUNT, through Smart Mobility Inc., to bring this project to fruition.”

Perrone said that AVNU is electrically powered, so the shuttle is quiet and non-polluting, and it uses solar panels to significantly extend system range. AVNU has been extensively tested by Perrone Robotics, and testing data has been evaluated by Albemarle County and JAUNT prior to launch.


Universal Robots launching 50 authorized training centers


Universal Robots is opening 50 Authorized Training Centers, 13 of which will be located in North America. | Credit: Universal Robots

Universal Robots (UR) is launching Authorized Training Centers (ATCs) that offer classes spanning basic to advanced programming of UR cobots. UR is planning 50 fully authorized ATCs worldwide, 13 of which will be in North America. The first few ATCs in the U.S. have already been authorized and are now offered by the following UR sales partners:

  • Advanced Control Solutions in Marietta, Ga.
  • HTE Technologies in St. Louis, Mo., and Lenexa, Kan.
  • Ralph W. Earl Company in Syracuse, N.Y.
  • Applied Controls in Malvern, Pa.

In addition to the ATCs hosted by UR partners, four training centers are also opening at UR’s offices in Ann Arbor, Mich.; Irving, Texas; Garden City, N.Y.; and Irvine, Calif.

UR’s certified trainers will conduct training modules that cover a range of core and advanced cobot programming skills, including cobot scripting, industrial communication, and interface usage. Small class sizes with student-centered objectives and hands-on practice with UR robots ensure that participants come away with valuable skills they can apply immediately in their workplace.

For class schedules and more information, visit the UR Academy site. The modules of the ATC program include:

Core: For any user of a UR cobot who has completed the online modules. Covers safety set-up, basic applications and flexible redeployment.

Advanced: For cobot users, technical sales people, and integrators with a practical need to optimize applications or find new ways of deploying UR cobots. Covers scripting, advanced uses of force control and TCP, conveyor tracking and performance review.

Industrial Communication: For users and developers who need to integrate cobots with third-party devices. Covers Modbus TCP, FTP server, dashboard server, socket communication, EtherNet/IP, and Profinet.

Interfaces: For users and developers who need in-depth knowledge on how to interface with UR cobots using script interfaces. Covers UR scripting, socket communication, client interfaces (ports 30001-30003), real-time data exchange, and XML/RPC; a minimal socket example appears after this list.

Service & Troubleshooting: For users, technicians, and engineers wanting/needing a better understanding of the mechanical hardware used by UR cobots, how to diagnose issues and resolve them. Covers the configuration of the cobot arm, controller, and safety system as well as preventative maintenance, system troubleshooting, and replacement of parts.
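As a taste of what the Interfaces module covers, the sketch below sends a single URScript command to a controller over the secondary client interface (port 30002). The IP address and joint targets are placeholders; anything like this should be tried against a simulator such as URSim before a live arm.

```python
import socket

ROBOT_IP = "192.168.1.100"  # placeholder address of the UR controller

# One line of URScript: move the joints to a target pose.
script = "movej([0.0, -1.57, 0.0, -1.57, 0.0, 0.0], a=1.0, v=0.5)\n"

with socket.create_connection((ROBOT_IP, 30002), timeout=2.0) as sock:
    sock.sendall(script.encode("ascii"))  # controller parses and executes
```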

UR’s certified trainers will conduct training modules that cover a range of core and advanced cobot programming skills, including cobot scripting, industrial communication, and interface usage. | Credit: Universal Robots

“Now, current and potential customers can get in-person training, customizing their specific applications and needs,” said Stuart Shepherd, regional sales director of Universal Robots’ Americas division. “Not only are our partners excited about this opportunity, they’re virtually lining up to be the next rollout.”

“From a business perspective, being able to offer this type of training also improves our place in the market, ensuring that current and potential customers start to rely on us as automation experts,” said Cale Harbour, vice president of product marketing at Advanced Control Solutions. “As our customers build their knowledge, they can deploy the technology faster and see the benefits to their production – and their bottom line – quicker. It’s a win-win for everybody involved.”

“Using this approach, we’ve expanded our role as supplier to assist with the application process as well,” said Marv Dixon, vice president of business development and sales, HTE Technologies. “The Training Center has also provided us with the perfect scenario in which we can introduce other products that our customers might not have otherwise considered, such as grippers and conveyors. With the Authorized Training Center distinction, we’ve become a resource that our customers can count on for up-to-date, accessible training and support.”


Automated system from MIT generates robotic actuators for novel tasks


An automated system developed by MIT researchers designs and 3D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. Credit: Subramanian Sundaram

CAMBRIDGE, Mass. — An automated system developed by researchers at the Massachusetts Institute of Technology designs and 3D prints complex robotic actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.

In a paper published in Science Advances, the researchers demonstrated the system by fabricating actuators that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. When it’s activated, it tilts at an angle and displays the famous Edvard Munch painting “The Scream.”

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility or magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials.

Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” said first author Subramanian Sundaram, Ph.D. ’18, a former graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

New robotic actuators mimic biology for efficiency

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming.

“You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram said.

Joining Sundaram on the paper were Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the ‘combinatorial explosion’

Robotic actuators are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.

Adding to that complexity, new 3D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high.

“What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram said.

The researchers first customized three polymer materials with specific properties they needed to build their robotic actuators: color, magnetization, and rigidity. They ultimately produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed.

Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.
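The paper’s optimizer is far more sophisticated, but the loop the article describes — perturb a voxel, re-simulate, keep improvements — can be illustrated as a greedy local search. The sizes, material count, and the caller-supplied render() function here are all stand-ins, not the MIT system.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VOXELS = 1_000   # toy count; the real system reconfigures ~5.5 million
N_MATERIALS = 3    # stand-ins for the clear, flexible, and magnetic inks

def total_error(assign, render, targets):
    """Mismatch between rendered views and target images, summed over
    the flat and tilted viewing angles."""
    return sum(np.abs(render(assign, angle) - img).sum()
               for angle, img in targets.items())

def optimize(render, targets, iters=100_000):
    assign = rng.integers(N_MATERIALS, size=N_VOXELS)  # random start
    best = total_error(assign, render, targets)
    for _ in range(iters):
        i = rng.integers(N_VOXELS)             # pick one voxel to perturb
        old = assign[i]
        assign[i] = rng.integers(N_MATERIALS)  # try a different material
        err = total_error(assign, render, targets)
        if err < best:
            best = err       # keep the improvement
        else:
            assign[i] = old  # revert the change
    return assign
```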


Credit: Subramanian Sundaram

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels.

Actuators can be fabricated with more than 100 voxel layers. Columns can contain more than 100 voxels, with different sequences of the materials that radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone.

The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.
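A toy version of that per-column shading can be written as Beer-Lambert attenuation: light passing down a column dims exponentially with the total absorption of the voxels it crosses. The absorption coefficients below are invented for illustration, not measured values from the paper.

```python
import math

# Assumed per-voxel absorption by material; illustrative values only.
ABSORPTION = {"clear": 0.01, "flexible": 0.05, "magnetic": 0.60}

def column_gray(column):
    """Gray level for one voxel column: transmitted light falls off
    exponentially with summed absorption. 1.0 is white; near 0.0 is dark."""
    return math.exp(-sum(ABSORPTION[m] for m in column))

print(column_gray(["clear"] * 100))                     # ~0.37, light gray
print(column_gray(["clear"] * 90 + ["magnetic"] * 10))  # ~0.001, near black
```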

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram said. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”

To fabricate the actuators, the researchers built a custom 3D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight and lift, and other metrics.

“We’re not yet able to print wings or anything on that scale, or with those materials,” said Sundaram. “But I think this is a first step toward that goal.”

Editor’s note: This article republished with permission from MIT News.


R-Series actuator from Hebi Robotics is ready for outdoor rigors

PITTSBURGH — What do both summer vacationers and field robots need to do? Get into the water. Hebi Robotics this week announced the availability of its R-Series actuators, which it said can enable engineers “to quickly create custom robots that can be deployed directly in wet, dirty, or outdoor environments.”

Hebi Robotics was founded in 2014 by Carnegie Mellon University professor and robotics pioneer Howie Choset. It makes hardware and software for developers to build robots for their specific applications. It also offers custom development services to make robots “simple, useful, and safe.”

Hebi’s team includes experts in robotics, particularly in motion control. The company has developed robotics tools for academic, aerospace, military, sewer inspection, and spaceflight users.

Robots can get wet and dirty with R-Series actuators

The R-Series actuator is built on Hebi’s X-Series platform. It is sealed to IP67 and is designed to be lightweight, compact, and energy-efficient. The series includes three models: the R8-3, which has continuous torque of 3 N-m and weighs 670 g; the R8-9, which has continuous torque of 8 N-m and weighs 685 g; and the R8-16, which has continuous torque of 16 N-m and weighs 715 g.


The R-Series actuator is sealed for wet and dirty environments. Source: Hebi Robotics

The actuators also include sensors that Hebi said “enable simultaneous control of position, velocity, and torque, as well as three-axis inertial measurement.”

In addition, the R-Series integrates a brushless motor, gear reduction, force sensing, encoders, and controls in a compact package, said Hebi. The actuators can run on 24-48V DC, include internal pressure sensors, and communicate via 100Mbps Ethernet.

On the software side, the R-Series has application programming interfaces (APIs) for MATLAB, the Robot Operating System (ROS), Python, C and C++, and C#, as well as support for Windows, Linux, and OS X.
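As a sketch of what driving an actuator through the Python API might look like, the snippet below follows the hebi-py package’s discovery-and-command pattern. The family and module names are placeholders for however the actuator is configured on your network; treat it as an illustration, not Hebi’s documentation.

```python
import time
import hebi

lookup = hebi.Lookup()
time.sleep(2.0)  # give the lookup time to discover modules on the network

# Placeholder family/name; set these to match your module's configuration.
group = lookup.get_group_from_names(['R-Series'], ['actuator_1'])
if group is None:
    raise RuntimeError('Actuator not found on the network')

command = hebi.GroupCommand(group.size)
feedback = group.get_next_feedback()  # position, velocity, torque, IMU
if feedback is None:
    raise RuntimeError('No feedback received')

command.position = feedback.position + 0.5  # move 0.5 rad from current pose
group.send_command(command)
```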

According to Hebi Robotics, the R-Series actuators will be available this autumn, and it is accepting pre-orders at 10% off the list prices. The actuator costs $4,500, and kits range from $20,000 to $36,170, depending on the number of degrees of freedom of the robotic arm. Customers should inquire about pricing for the hexapod kit.


Self-driving cars may not be best for older drivers, says Newcastle University study


VOICE member Ian Fairclough and study lead Dr. Shuo Li in test of older drivers. Source: Newcastle University

With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding, we see them adjust their driving to avoid difficult situations,” explained Dr. Shuo Li, an expert in intelligent transport systems at Newcastle University.

“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”

Potential game-changer

Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities without input from the driver.

But it’s not as clear-cut as it seems, said Li.

“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”

“But, unlike Levels 4 or 5, there are still some situations where the car would ask the driver to take back control, and at that point they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people, that switch between tasks is quite easy, but as we age it becomes increasingly difficult, and this is further complicated if the conditions on the road are poor.”

Newcastle University DriveLAB tests older drivers

Led by Professor Phil Blythe and Dr. Li, the Newcastle University team has been researching the time it takes for older drivers to take back control of an automated car in different scenarios, as well as the quality of their driving in these situations.

Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to “take back” control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.

The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time), when the driver made “active input” such as braking or taking the steering wheel (take-over time), and finally the point at which they registered the obstruction and indicated to move out and avoid it (indicator time).

“In clear conditions, the quality of driving was good, but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60 mph, that means our older drivers would have needed an extra 35 m warning distance — that’s equivalent to the length of 10 cars.

“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
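The quoted 35 m figure checks out as simple arithmetic on the 1.3-second gap between the two groups:

```python
mph_to_ms = 1609.344 / 3600       # miles per hour -> meters per second
gap_s = 8.3 - 7.0                 # older vs. younger response time, seconds
extra_m = 60 * mph_to_ms * gap_s  # distance covered at 60 mph in that gap
print(round(extra_m, 1))          # 34.9 -> roughly 35 m
```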

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.

In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.


VOICE member Pat Wilkinson. Source: Newcastle University

Designing automated cars of the future

The research team also explored older drivers’ opinions of and requirements for the design of automated vehicles after they gained first-hand experience with the technologies in the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.

The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.

“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”

“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”


Case studies of older drivers

Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.

Now 74, the former magistrate said it’s interesting to see how technology is changing and gradually taking control – and responsibility – away from the driver.

“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”

“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.

“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”

Ian Fairclough, 77, from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”

“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”

Fairclough took part in two of the studies in the VR simulator and said he found it difficult to switch attention quickly from one task to another.

“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear and practical. I think perhaps you can have too many buttons.”

Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.


Vegebot robot applies machine learning to harvest lettuce

Vegebot, a vegetable-picking robot, uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop.

A team at the University of Cambridge initially trained Vegebot to recognize and harvest iceberg lettuce in the laboratory. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The researchers published their results in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the U.K., iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot.” — Josie Hughes, University of Cambridge report co-author

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr. Fumiya Iida.

The Vegebot first identifies the “target” crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested. Finally, it cuts the lettuce from the rest of the plant without crushing it so that it is “supermarket ready.”

“For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

Vegebot designed for lettuce-picking challenge

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image. Then for each lettuce, the robot classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
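In pseudocode terms, that is a two-stage pipeline: detect every head in the overhead image, then classify each crop. The sketch below is illustrative; detect() and classify() stand in for the trained models described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int  # bounding box of one lettuce head, in pixels

def harvest_targets(image, detect, classify):
    """Stage 1 localizes every lettuce; stage 2 keeps only the heads
    the classifier judges mature and disease-free."""
    ready = []
    for box in detect(image):
        crop = image[box.y:box.y + box.h, box.x:box.x + box.w]
        if classify(crop) == "ready":
            ready.append(box)
    return ready
```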


Vegebot uses machine vision to identify heads of iceberg lettuce. Credit: University of Cambridge

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognize healthy lettuce in the lab, the team then trained it in the field, in a variety of weather conditions, on thousands of real lettuce heads.

A second camera on the Vegebot is positioned near the cutting blade, and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In the future, robotic harvesters could help address problems with labor shortages in agriculture. They could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded.

However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6 million ($8.26 million U.S.) for the new CDT, which will support at least 50 Ph.D. students.


Cowen, MassRobotics collaborating on robotics & AI research


Cowen Inc. and MassRobotics today announced a collaboration to bring together their extensive market knowledge to advance research into the emerging robotics and artificial intelligence industry. Based in the Boston area, MassRobotics is a global hub for robotics, and the collective work of a group of engineers, rocket scientists, and entrepreneurs focused on the needs of the robotics community.

MassRobotics is the strategic partner of the Robotics Summit & Expo, which is produced by The Robot Report.

“The robotics and artificial intelligence industry is a rapidly expanding market, and one that will define the advancement of manufacturing and services on a global basis. We are thrilled to be partnering with such an innovative collective in MassRobotics, which was established through a shared vision of advancing the robotics industry,” said Jeffrey M. Solomon, Chief Executive Officer of Cowen. “Cowen has dedicated substantial time into the research of robotics and AI and we look forward to sharing our knowledge and capital markets expertise to support the emerging growth companies associated with MassRobotics.”

Related: MassRobotics, SICK partner to assist robotics startups

Fady Saad, Co-founder and Director of Partnerships of MassRobotics, added, “Cowen has a proven track record of delivering in-depth research across sectors, which allows them to understand the dynamic flow of the markets and provide capital to support emerging companies. Collectively we bring together the best of market research and industry knowledge in an effort to advance robotics and provide companies with opportunities for growth.”

About Cowen Inc.

Cowen Inc. is a diversified financial services firm that operates through two business segments: a broker-dealer and an investment management division. The company’s broker-dealer division offers investment banking services, equity and credit research, sales and trading, prime brokerage, global clearing, and commission management services. Cowen’s investment management segment offers actively managed alternative investment products. Cowen Inc. focuses on delivering value-added capabilities to its clients in order to help them outperform. Founded in 1918, the firm is headquartered in New York and has offices worldwide. Learn more at Cowen.com.

About MassRobotics

MassRobotics is the collective work of a group of Boston-area engineers, rocket scientists, and entrepreneurs. With a shared vision to create an innovation hub and startup cluster focused on the needs of the robotics community, MassRobotics was born. MassRobotics’ mission is to help create and scale the next generation of successful robotics and connected device companies by providing entrepreneurs and innovative robotics/automation startups with the workspace and resources they need to develop, prototype, test, and commercialize their products and solutions.
