Magnetic microrobot can measure both cell stiffness and traction

Scientists have developed a tiny mechanical probe that can measure the inherent stiffness of cells and tissues as well as the internal forces the cells generate and exert on one another. Their new "magnetic microrobot" is the first such probe to be able to quantify both properties, the researchers report, and will aid in understanding cellular processes associated with development and disease.

A continuum robot inspired by elephant trunks

Conventional robots based on separate joints do not always perform well in complex real-world tasks, particularly those that involve the dexterous manipulation of objects. Some roboticists have thus been trying to devise continuum robots, robotic platforms characterized by infinite degrees of freedom and no fixed number of joints.

Teaching old robots new tricks

Robots, and in particular industrial robots, are programmed to perform certain functions. The Robot Operating System (ROS) is a very popular framework that facilitates asynchronous coordination between a robot and other drives and/or devices. ROS has become a go-to means of enabling the development of advanced capabilities across the robotics sector.

Southwest Research Institute (SwRI) and the ROS-I community often develop applications in ROS 2, the successor to ROS 1. In many cases, particularly where legacy application code is utilized, bridging back to ROS 1 is still very common and remains one of the challenges in supporting the adoption of ROS for industry. This post does not aim to explain ROS, or the journey of migrating to ROS 2, in detail; as a reference, I invite interested readers to read the blogs by my colleagues and our partners at Open Robotics/Open Source Robotics Foundation.

Giving an old robot a new purpose

Robots have been manufactured since the 1950s and, naturally, newer versions over time have better properties and performance than their ancestors. And this is where the question comes in: how can you give new capabilities to those older but still functional robots?

This is becoming a more important question as the circular economy has gained momentum and as it has become better understood that the carbon footprint of manufacturing a new robot can be offset by reusing a functional one. Each robot has its own capabilities and limitations, and those must be taken into account. However, the question of “can I bring new life to this old robot?” always comes up, and this exact use case came up recently here at SwRI.

Confirming views of the camera-to-robot calibration. | Credit: ROS-Industrial

In the lab, an older Fanuc robot seemed to be a good candidate for a system that could demonstrate basic Scan-N-Plan capabilities in an easy-to-digest way and be constantly available for testing and demonstrations. The particular system was a demo unit from a former integration company and included an inverted Fanuc robot manufactured in 2008.

The demo envisioned for this system would be a basic Scan-N-Plan implementation that would locate and execute the cleaning of a mobile phone screen. Along the way, we encountered several obstacles that are described below.

Driver updates

Let’s talk first about the drivers. A driver is a software component that lets the operating system and a device communicate with each other. Each robot has its own drivers for communicating properly with whatever instructs it how to move. Handling drivers differs between a computer and a robot, however, because a computer’s drivers can be updated faster and more easily than a robot’s.

When device manufacturers identify errors, they create a driver update to correct them. On a computer, you are notified when a new update is available; you accept it, and the computer starts updating. But in the world of industrial robots, including the Fanuc in the lab here, you need to manually upload the driver and the supporting software options to the robot controller. Once the driver software and options are installed, a fair amount of testing is needed to understand how the changes you made affect the rest of the system. In some situations you may receive a robot with the options needed for external system communication already installed; even so, it is always advisable to check and confirm functionality.

With the passing of time, an older robot will not communicate as fast as newer versions of the same model, so to obtain the best results you will want to update your communication drivers, if updates are available. The Fanuc robot comes with a controller that lets you operate it manually via a teach pendant held in the user’s hand at all times. The controller can also be set to automatic, in which case the robot executes its programmed instructions from a simple cycle start, but all safety systems need to be functional and in the proper state for the system to operate.

The rapid reporting of the robot’s state is very important for the computer’s software (in this case our ROS application) to know where the robot is and whether it is performing its instructions correctly. This position is commonly known as the robot pose. For robotic arms, the information can be separated into joint states, and your laptop will probably have an issue with an old robot because, while in auto mode, it reports these joint states more slowly than the ROS-based software on the computer expects. One way to solve this slow reporting is to update the drivers or add the correct configuration to your robot’s controller, but that is not always possible or feasible.
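
If you suspect the controller’s reporting rate is the bottleneck, a quick first check is to time the joint state topic on the ROS side. Below is a minimal sketch using rclpy; the /joint_states topic name and the 10 Hz warning threshold are illustrative assumptions, not values from our Fanuc setup.

    # Minimal sketch: log the rate at which /joint_states arrives, to spot a
    # controller that reports more slowly than the motion pipeline expects.
    # The topic name and the 10 Hz threshold are assumptions for illustration.
    import time

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState


    class JointStateRateMonitor(Node):
        def __init__(self):
            super().__init__('joint_state_rate_monitor')
            self.last_time = None
            self.create_subscription(JointState, '/joint_states', self.callback, 10)

        def callback(self, msg):
            now = time.monotonic()
            if self.last_time is not None:
                rate = 1.0 / max(now - self.last_time, 1e-6)
                if rate < 10.0:
                    self.get_logger().warn(f'/joint_states arriving at only {rate:.1f} Hz')
                else:
                    self.get_logger().info(f'/joint_states arriving at {rate:.1f} Hz')
            self.last_time = now


    def main():
        rclpy.init()
        rclpy.spin(JointStateRateMonitor())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()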

Updated location of the RGB-D camera in the Fanuc cell. | Credit: ROS-Industrial

Another way to make the robot move as expected is to calibrate the robot with an RGB-D camera. To accomplish this, you must place the robot in a strategic position so that most of it is visible to the camera. Then view the camera’s projection and compare it to the URDF, the file that represents the robot’s model in simulation. With both representations displayed, in RViz for example, you can adjust the origin of the camera_link until the projection aligns with the URDF.
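
As a rough illustration of this manual adjustment, the sketch below publishes a static transform between the robot’s base frame and the camera_link frame using rclpy and tf2_ros. The frame names and the translation and rotation values are placeholders that you would nudge while watching the overlay in RViz; they are not our actual calibration results.

    # Minimal sketch: publish an adjustable base_link -> camera_link transform.
    # The frame names and offsets below are placeholders, not a real calibration.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import TransformStamped
    from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster


    class CameraExtrinsicsPublisher(Node):
        def __init__(self):
            super().__init__('camera_extrinsics_publisher')
            self.broadcaster = StaticTransformBroadcaster(self)

            t = TransformStamped()
            t.header.stamp = self.get_clock().now().to_msg()
            t.header.frame_id = 'base_link'    # robot base frame from the URDF
            t.child_frame_id = 'camera_link'   # RGB-D camera frame
            # Translation (meters) and rotation (quaternion) are the values you
            # nudge until the camera's point cloud lines up with the URDF in RViz.
            t.transform.translation.x = 1.2
            t.transform.translation.y = 0.0
            t.transform.translation.z = 2.1
            t.transform.rotation.x = 0.0
            t.transform.rotation.y = 0.707
            t.transform.rotation.z = 0.0
            t.transform.rotation.w = 0.707
            self.broadcaster.sendTransform(t)


    def main():
        rclpy.init()
        rclpy.spin(CameraExtrinsicsPublisher())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()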

For the Scan-N-Plan application, the RGB-D camera was originally mounted on part of the robot’s end effector. When we encountered the joint state delay, however, the camera was moved to a strategic position on the roof of the robot’s enclosure, where it could view the base and the Fanuc robot for calibration to the simulation model, as can be seen in the photos below. In addition, we set the robot to manual mode, in which the user holds the teach pendant and tells the robot to start executing the set of instructions generated by the ROS-based Scan-N-Plan program.

Where we landed and what I learned

While not as easy as a project on “This Old House,” you can teach an old robot new tricks. It is very important to know your robot’s control platform. A problem may lie not with your code but with the robot itself, so make sure the robot, its associated controller and its software work well before seeking alternatives to enable new functionality within the constraints of your available hardware.

Though not always efficient in getting to the solution, older robots can deliver value when you systematically design the approach and work within the constraints of your hardware, taking advantage of the tools available, in particular those in the ROS ecosystem.

About the Author

Bryan Marquez was an engineer intern in the robotics department at the Southwest Research Institute.

Luminar launches 3D mapping software

Luminar is launching 3D mapping software built off technology it acquired from Civil Maps. | Source: Luminar

Luminar, an automotive technology development company, is expanding its software offerings to include high-definition, 3D maps that update automatically and are built from data collected by production vehicles that are also powered by Luminar software and hardware.

Luminar is making use of the technology it picked up in the second quarter of 2022, when it acquired Civil Maps, a developer of LiDAR maps for automotive uses. The company is demonstrating its new 3D mapping technology platform at CES on Luminar-equipped vehicles that are roaming around Las Vegas.

Luminar has already signed its first mapping customer, which the company did not name. The customer will use Luminar’s data to further improve its AI engine and will also help improve Luminar’s perception software.

Luminar’s other offerings include its Sentinel software stack for consumer vehicles, which is made up of the company’s LiDAR-based Proactive Safety hardware and software product, as well as its LiDAR-based Highway Automation software.


Two vehicle models featuring Luminar’s software are making their North American debut at CES this year. The Volvo EX90, an all-electric SUV that uses the company’s software and hardware as standard on every vehicle, is being shown in the US for the first time. Luminar’s Iris LiDAR is integrated into the roof line of the vehicle.

Additionally, SAIC’s Rising Auto R7, which started production in China last October, also uses Luminar’s technology. SAIC is one of China’s largest automakers.

“2022 marked an inflection point for Luminar, as the first of its kind to move from R&D to production vehicles,” Austin Russell, founder and CEO of Luminar, said. “Our big bet on production consumer vehicles and enhancing, not replacing, the driver is starting to pay off big time. I expect Luminar to make a sweeping impact in 2023 as the automotive industry continues to converge with our roadmap.”

Inuitive sensor modules bring VSLAM to AMRs

Inuitive introduces the M4.5S (center) and M4.3WN (right) sensor modules that add VSLAM for AMRs and AGVs.

Inuitive, an Israel-based developer of vision-on-chip processors, launched its M4.5S and M4.3WN sensor modules. Designed to integrate into robots and drones, both sensor modules are built around the NU4000 vision-on-chip (VoC) processor, which adds depth sensing and image processing with AI and Visual Simultaneous Localization and Mapping (VSLAM) capabilities.

The M4.5S provides robots with enhanced depth from stereo sensing along with obstacle detection and object recognition. It features a field of view of 88×58 degrees, a minimum sensing range of 9 cm (3.54″) and a wide dynamic operating temperature range of up to 50 degrees Celsius (122 degrees Fahrenheit). The M4.5S supports the Robot Operating System (ROS) and has an SDK that is compatible with Windows, Linux and Android.

The M4.3WN features tracking and VSLAM navigation based on fisheye cameras and an IMU together with depth sensing and on-chip processing. This enables free navigation, localization, path planning, and static and dynamic obstacle avoidance for AMRs and AGVs. The M4.3WN is designed in a metal case to serve in industrial environments.

“Our new all-in-one sensor modules expand our portfolio targeting the growing market of autonomous mobile robots. Together with our category-leading vision-on-chip processor, we now enable robotic devices to look at the world with human-like visual understanding,” said Shlomo Gadot, CEO and co-founder of Inuitive. “Inuitive is fully committed to continuously developing the best performing products for our customers and becoming their supplier of choice.”

The M4.5S and the M4.3WN sensor modules’ primary processing unit is Inuitive’s all-in-one NU4000 processor. Both modules are equipped with depth and RGB sensors that are controlled and timed by the NU4000. Data generated by the sensors and processed in real time at a high frame rate by the NU4000 is then used to generate depth information for the host device.

Researchers develop AV object detection system with 96% accuracy

A Waymo autonomous vehicle. | Source: Waymo

An international research team at the Incheon National University in South Korea has created an Internet-of-Things (IoT) enabled, real-time object detection system that can detect objects with 96% accuracy. 

The team of researchers created an end-to-end neural network that works with their IoT technology to detect objects with high accuracy in 2D and in 3D. The system is based on deep learning specialized for autonomous driving situations. 

“For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” Professor Gwanggil Jeon, leader of the project, said. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects,” he elaborated.

The team fed RGB images and point cloud data as input to YOLOv3. The identification algorithm then outputs classification labels, bounding boxes, and accompanying confidence scores.
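
The paper is the authoritative description of the team’s modified network, but purely as a generic illustration of the 2D half of such a pipeline, the sketch below runs a stock Darknet YOLOv3 model through OpenCV’s DNN module and extracts class IDs, bounding boxes, and confidence scores. The file paths and the 0.5 confidence threshold are placeholders, not details from the study.

    # Generic YOLOv3 2D inference sketch using OpenCV's DNN module; this is not
    # the authors' modified 3D network. Paths and thresholds are placeholders.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
    image = cv2.imread('frame.jpg')
    h, w = image.shape[:2]

    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    detections = []
    for layer_output in outputs:
        for row in layer_output:
            scores = row[5:]                      # per-class confidence scores
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > 0.5:
                # Box is reported as normalized center/size; convert to pixels.
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                box = [int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)]
                detections.append((class_id, confidence, box))

    for class_id, confidence, box in detections:
        print(f'class {class_id}  conf {confidence:.2f}  box {box}')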

The researchers then tested the performance of their system with the Lyft dataset and found that YOLOv3 was able to accurately detect 2D and 3D objects more than 96% of the time. The team sees many potential uses for their technology, including for autonomous vehicles, autonomous parking, autonomous delivery and for autonomous mobile robots. 

“At present, autonomous driving is being performed through LiDAR-based image processing, but it is predicted that a general camera will replace the role of LiDAR in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” Jeon said. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years.”

The team’s research was recently published in IEEE Transactions on Intelligent Transportation Systems. Authors on the paper include Jeon; Imran Ahmed, from Anglia Ruskin University’s School of Computing and Information Sciences in Cambridge; and Abdellah Chehri, from the department of mathematics and computer science at the Royal Military College of Canada in Kingston, Canada.

Intel Labs introduces open-source simulator for AI

SPEAR creates photorealistic simulation environments that provide challenging workspaces for training robot behavior. | Credit: Intel

Intel Labs collaborated with the Computer Vision Center in Spain, Kujiale in China, and the Technical University of Munich to develop the Simulator for Photorealistic Embodied AI Research (SPEAR). The result is a highly realistic, open-source simulation platform that accelerates the training and validation of embodied AI systems in indoor domains. The solution can be downloaded under an open-source MIT license.

Existing interactive simulators have limited content diversity, physical interactivity, and visual fidelity. This more realistic simulation platform allows developers to train and validate embodied agents across a growing set of tasks and domains.

The goal of SPEAR is to drive research and commercialization of household robotics through the simulation of human-robot interaction scenarios.

It took a team of professional artists more than a year to construct a collection of high-quality, handcrafted, interactive environments. The SPEAR starter pack features more than 300 virtual indoor environments with more than 2,500 rooms and 17,000 objects that can be manipulated individually.

These interactive training environments use detailed geometry, photorealistic materials, realistic physics, and accurate lighting. New content packs targeting industrial and healthcare domains will be released soon.

The use of highly detailed simulation enables the development of more robust embodied AI systems. Roboticists can leverage simulated environments to train AI algorithms and optimize perception functions, manipulation, and spatial intelligence. The ultimate outcome is faster validation and a reduction in time-to-market.

In embodied AI, agents learn from physical variables. Capturing and collating these encounters can be time-consuming, labor-intensive, and risky. The interactive simulations provide an environment to train and evaluate robots before deploying them in the real world.

Overview of SPEAR

SPEAR is designed based on three main requirements:

  1. Support a large, diverse, and high-quality collection of environments
  2. Provide sufficient physical realism to support realistic interactions and manipulation of a wide range of household objects
  3. Offer as much photorealism as possible, while still maintaining enough rendering speed to support training complex embodied agent behaviors

At its core, SPEAR was implemented on top of the Unreal Engine, which is an industrial-strength open-source game engine. SPEAR environments are implemented as Unreal Engine assets, and SPEAR provides an OpenAI Gym interface to interact with environments via Python.
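
The SPEAR GitHub page documents the actual Python API; the sketch below only illustrates the general shape of the Gym-style interaction loop that such an interface enables, using a standard registered Gym environment as a stand-in for a SPEAR scene.

    # Generic OpenAI Gym interaction loop. SPEAR exposes a Gym-style Python
    # interface per the article; see the SPEAR GitHub page for the real API.
    import gym


    def run_episode(env, max_steps=500):
        """Roll out one episode with random actions and return the total reward."""
        reset_out = env.reset()
        obs = reset_out[0] if isinstance(reset_out, tuple) else reset_out
        total_reward = 0.0
        for _ in range(max_steps):
            action = env.action_space.sample()   # replace with a trained policy
            step_out = env.step(action)
            if len(step_out) == 5:               # Gym >= 0.26: terminated/truncated
                obs, reward, terminated, truncated, info = step_out
                done = terminated or truncated
            else:                                # classic Gym: 4-tuple
                obs, reward, done, info = step_out
            total_reward += reward
            if done:
                break
        return total_reward


    if __name__ == '__main__':
        # Any registered Gym environment works here as a stand-in for a SPEAR scene.
        env = gym.make('CartPole-v1')
        print('episode return:', run_episode(env))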

SPEAR currently supports four distinct embodied agents:

  1. OpenBot Agent – well-suited for sim-to-real experiments, it provides identical image observations to a real-world OpenBot, implements an identical control interface, and has been modeled with accurate geometry and physical parameters
  2. Fetch Agent – modeled using accurate geometry and physical parameters, Fetch Agent is able to interact with the environment via a physically realistic gripper
  3. LoCoBot Agent – modeled using accurate geometry and physical parameters, LoCoBot Agent is able to interact with the environment via a physically realistic gripper
  4. Camera Agent – which can be teleported anywhere within the environment to create images of the world from any angle

The agents return photorealistic, robot-centric observations from camera sensors, as well as odometry from wheel encoder states and joint encoder states. This is useful for validating kinematic models and predicting the robot’s operation.

For optimizing navigational algorithms, the agents can also return a sequence of waypoints representing the shortest path to a goal location, as well as GPS and compass observations that point directly to the goal. Agents can return pixel-perfect semantic segmentation and depth images, which is useful for correcting for inaccurate perception in downstream embodied tasks and gathering static datasets.

SPEAR currently supports two distinct tasks:

  • The Point-Goal Navigation Task randomly selects a goal position in the scene’s reachable space, computes a reward based on the agent’s distance to the goal, and triggers the end of an episode when the agent hits an obstacle or reaches the goal (a toy sketch of this logic follows the list).
  • The Freeform Task is an empty placeholder task that is useful for collecting static datasets.
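
Purely as a toy illustration of the reward-and-termination logic described for the Point-Goal Navigation Task, the sketch below penalizes distance to the goal and ends the episode on a collision or on reaching the goal. The specific reward values and the goal radius are assumptions, not SPEAR’s actual implementation.

    # Toy sketch of point-goal reward/termination logic as described above.
    # Reward shaping and thresholds are illustrative assumptions only.
    import math


    def point_goal_step(agent_xy, goal_xy, collided, goal_radius=0.5):
        """Return (reward, done) for one step of a point-goal navigation task."""
        distance = math.dist(agent_xy, goal_xy)
        if collided:
            return -1.0, True        # hitting an obstacle ends the episode
        if distance < goal_radius:
            return 10.0, True        # reaching the goal ends the episode
        return -distance, False      # otherwise, penalize distance to the goal


    if __name__ == '__main__':
        print(point_goal_step((1.0, 2.0), (4.0, 6.0), collided=False))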

SPEAR is available under an open-source MIT license, ready for customization on any hardware. For more details, visit the SPEAR GitHub page.
