U.S. Robotics Roadmap calls for white papers for revision

The U.S. National Robotics Roadmap was first published in 2009 and has since been revised twice, in 2013 and 2016. Over the past 10 years, government agencies, universities, and companies have used it as a reference for where robotics is going. The objective is to publish the fourth version of the roadmap by summer 2020.

The team developing the U.S. National Robotics Roadmap has put out a call to engage about 150 to 200 people from academia and industry to ensure that it is representative of the robotics community’s view of the future. The roadmap will cover manufacturing, service, medical, first-responder, and space robotics.

The revised roadmap will also include considerations related to ethics and workforce. It will cover emerging applications, the key challenges to progress, and what research and development is needed.

Join community workshops

Three one-and-a-half-day workshops will be organized for community input to the roadmap. The workshops will take place as follows:

  • Sept. 11-12 in Chicago (organized by Nancy Amato, co-director of the Parasol Lab at Texas A&M University and head of the Department of Computer Science at the University of Illinois at Urbana-Champaign)
  • Oct. 17-18 in Los Angeles (organized by Maja Mataric, Chan Soon-Shiong distinguished professor of computer science, neuroscience, and pediatrics at the University of Southern California)
  • Nov. 15-16 in Lowell, Mass. (organized by Holly Yanco, director of the NERVE Center at the University of Massachusetts Lowell)

Participation in these workshops will be by invitation only. To participate, please submit a white paper/position statement of no more than 1.5 pages. What are the key use cases for robotics over a five-to-10-year horizon, what are the key limitations, and what R&D is needed in that time frame? The white paper can address all three aspects or focus on one of them. It must include the following information:

  • Name, affiliation, and e-mail address
  • A position statement (1.5 pages max)

Please submit the white paper as regular text or as a PDF file. Statements that are too long will be ignored. Position papers that only focus on current research are not appropriate. A white paper should present a future vision and not merely discuss state of the art.

White papers should be submitted by end of the day Aug. 15, 2019, to roadmapping@robotics-vo.org. Late submissions may not be considered. We will evaluate submitted white papers by Aug. 18 and select people for the workshops by Aug. 19.

Roadmap revision timeline

The workshop reports will be used as the basis for a synthesis of a new roadmap. The nominal timeline is:

  • August 2019: Call for white papers
  • September – November 2019: Workshops
  • December 2019: Workshop reports finalized
  • January 2020: Synthesis meeting at UC San Diego
  • February 2020: Publish draft roadmap for community feedback
  • April 2020: Revision of roadmap based on community feedback
  • May 2020: Finalize roadmap with graphics design
  • July 2020: Publish roadmap

If you have any questions about the process, the scope, etc., please send e-mail to Henrik I Christensen at hichristensen@eng.ucsd.edu.

Henrik I Christensen spoke at the Robotics Summit & Expo in Boston.

Editor’s note: Christensen, Qualcomm Chancellor’s Chair of Robot Systems at the University of California San Diego and co-founder of Robust AI, delivered a keynote address at last month’s Robotics Summit & Expo, produced by The Robot Report.


Giving robots a better feel for object manipulation


A new learning system developed by MIT researchers improves robots’ abilities to mold materials into target shapes and to make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch, and it may have fun applications in personal robotics, such as modeling clay shapes or rolling sticky rice for sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are “trained” using the models to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators mainly focus on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but they rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper being presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small portions of different materials – “particles” – interact when they’re poked and prodded. The model directly learns from data in cases where the underlying physics of the movements are uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of its touch. As the robot handles the objects, the model also helps to further refine the robot’s control.

In experiments, a robotic hand with two fingers, called “RiceGrip,” accurately shaped a deformable foam to a desired configuration – such as a “T” shape – that serves as a proxy for sushi rice. In short, the researchers’ model serves as a type of “intuitive physics” brain that robots can leverage to reconstruct three-dimensional objects somewhat similarly to how humans do.

“Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can accomplish amazing manipulation tasks that are far beyond the reach of current robots,” says first author Yunzhu Li, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to build this type of intuitive model for robots to enable them to do what humans can do.”

“When children are 5 months old, they already have different expectations for solids and liquids,” adds co-author Jiajun Wu, a CSAIL graduate student. “That’s something we know at an early age, so maybe that’s something we should try to model for robots.”

Joining Li and Wu on the paper are: Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL and the Center for Brains, Minds, and Machines (CBMM); and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

A new “particle simulator” developed by MIT improves robots’ abilities to mold materials into simulated target shapes and interact with solid objects and liquids. This could give robots a refined touch for industrial applications or for personal robotics. | Credit: MIT

Dynamic graphs

A key innovation behind the model, called “dynamic particle interaction networks” (DPI-Nets), was creating dynamic interaction graphs, which consist of thousands of nodes and edges that can capture complex behaviors of so-called particles. In the graphs, each node represents a particle, and neighboring nodes are connected by directed edges, which represent the interaction passing from one particle to the other. In the simulator, a liquid or a deformable object is made up of hundreds of small spheres, the particles.
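As a rough illustration of how such a graph can be built (a minimal Python sketch under our own assumptions, not the authors’ DPI-Nets code), every particle becomes a node, and directed edges connect particles that lie within a neighborhood radius:

```python
# Illustrative construction of a dynamic interaction graph: nodes are
# particles, directed edges connect particles within a neighborhood radius.
import numpy as np

def build_interaction_graph(positions, radius):
    """positions: (N, 3) particle coordinates -> list of (sender, receiver)."""
    n = len(positions)
    edges = []
    for i in range(n):
        # Distance from particle i to every particle, excluding itself.
        dists = np.linalg.norm(positions - positions[i], axis=1)
        for j in np.nonzero((dists < radius) & (dists > 0))[0]:
            edges.append((i, int(j)))  # interaction passed from i to j
    return edges

pts = np.random.rand(200, 3)  # 200 particles sampled in a unit cube
edges = build_interaction_graph(pts, radius=0.1)
```

Because the particles move at every time step, the edges must be recomputed as the simulation advances, which is what makes the graphs dynamic.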

The graphs are constructed as the basis for a machine-learning system called a graph neural network. In training, the model learns how particles in different materials react and reshape over time. It does so by implicitly calculating various properties for each particle, such as its mass and elasticity, to predict if and where the particle will move in the graph when perturbed.

The model then leverages a “propagation” technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material (rigid, deformable, and liquid) to shoot a signal that predicts particle positions at incremental time steps. At each step, it moves and reconnects particles, if needed.

For example, if a solid box is pushed, perturbed particles will be moved forward. Because all particles inside the box are rigidly connected with each other, every other particle in the object undergoes the same calculated translation and rotation, so particle connections remain intact and the box moves as a single unit. But if an area of deformable foam is indented, the effect is different: perturbed particles move forward a lot, surrounding particles move forward only slightly, and particles farther away won’t move at all. With liquids being sloshed around in a cup, particles may completely jump from one end of the graph to the other. The model must learn to predict where and how much all affected particles move, which is computationally complex.
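The propagation step can be pictured as message passing on the graph. Below is a minimal, illustrative sketch (again, not the paper’s code): each particle aggregates feature “messages” from its incoming edges and updates its own state. The rigid, deformable, and liquid variants described above would swap in different learned message and update functions; here random matrices stand in for the learned weights.

```python
# Minimal message-passing sketch of graph propagation. W_msg and W_upd are
# random stand-ins for the learned networks in a real model.
import numpy as np

def propagate(states, edges, W_msg, W_upd):
    """states: (N, D) per-particle features; edges: (sender, receiver) pairs."""
    incoming = np.zeros_like(states)
    for s, r in edges:
        incoming[r] += states[s] @ W_msg       # message sent from s to r
    return np.tanh(states @ W_upd + incoming)  # per-particle state update

rng = np.random.default_rng(0)
n, d = 50, 8
states = rng.normal(size=(n, d))
# A simple bidirectional chain of neighbors, for illustration only.
edges = [(i, i + 1) for i in range(n - 1)] + [(i + 1, i) for i in range(n - 1)]
W_msg, W_upd = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))
for _ in range(3):                             # several propagation steps
    states = propagate(states, edges, W_msg, W_upd)
```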

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping target shapes out of deformable foam. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the position of the particles. Then, the model adds edges between particles and reconstructs the foam into a dynamic graph customized for deformable materials.

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world position of the particles to the targeted position of the particles. Whenever the particles don’t align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
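That control loop can be sketched as follows (illustrative names and a toy stand-in simulator, not the paper’s implementation): the robot scores candidate actions by how close the simulator’s predicted particle positions land to the target, executes the best one, and treats the remaining gap as the error signal for refining the model.

```python
# Hypothetical control-loop sketch: pick the action whose predicted outcome
# best matches the target particle positions.
import numpy as np

def choose_action(observed, target, predict, candidates):
    """observed/target: (N, 3) particle positions; predict(positions, action)
    returns the simulator's predicted next positions."""
    errors = [np.mean(np.linalg.norm(predict(observed, a) - target, axis=1))
              for a in candidates]
    return candidates[int(np.argmin(errors))]

predict = lambda pos, a: pos + a               # toy stand-in for the simulator
candidates = [np.array([dx, 0.0, 0.0]) for dx in (-0.01, 0.0, 0.01)]
observed = np.random.rand(50, 3)
target = observed + np.array([0.01, 0.0, 0.0])
action = choose_action(observed, target, predict, candidates)
# After executing the action, the gap between the simulator's prediction and
# the newly observed particles is the error signal used to tweak the model.
```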

Next, the researchers aim to improve the model to help robots better predict interactions with partially observable scenarios, such as knowing how a pile of boxes will move when pushed, even if only the boxes at the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module by operating directly on images. This will be a joint project with Dan Yamins’s group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. “You’re dealing with these cases all the time where there’s only partial information,” Wu says. “We’re extending our model to learn the dynamics of all particles, while only seeing a small portion.”

Editor’s Note: This article was republished with permission from MIT News.

Robotics investments recap: March 2019

CloudMinds was among the robotics companies receiving funding in March 2019. Source: CloudMinds

Investments in robots, autonomous vehicles, and related systems totaled at least $1.3 billion in March 2019, down from $4.3 billion in February. On the other hand, automation companies reported $7.8 billion in mergers and acquisitions last month. While the lower investment total may represent a slowdown, note that many businesses did not specify the amounts involved in their transactions, of which there were at least 58 in March.

Self-driving cars and trucks, including machine learning and sensor technologies, continued to receive significant funding. Although Lyft’s initial public offering was not directly related to autonomous vehicles, it illustrates how much investment is flowing into transportation.

Other use cases represented in March 2019 included surgical robotics, industrial automation, and service robots. See the table below, which lists amounts in millions of dollars where they were available:

| Company | Amt. (M$) | Type | Lead investor, partner, acquirer | Date | Technology |
| --- | --- | --- | --- | --- | --- |
| Airbiquity | 15 | investment | Denso Corp., Toyota Motor Corp., Toyota Tsusho Corp. | March 12, 2019 | connected vehicles |
| AROMA BIT Inc. | 2.2 | Series A | Sony Innovation Fund | March 3, 2019 | olfactory sensors |
| AtomRobot | | Series B1 | Y&R Capital | March 5, 2019 | industrial automation |
| Automata | 7.4 | Series A | ABB | March 19, 2019 | robot arm |
| Avidbots | 23.6 | Series B | True Ventures | March 21, 2019 | commercial floor cleaning |
| Boranet | | Series A | Gobi Partners | March 6, 2019 | IIoT, machine vision |
| Brodmann17 | 11 | Series A | OurCrowd | March 6, 2019 | deep learning, autonomous vehicles |
| CloudMinds | 300 | investment | SoftBank Vision Fund | March 26, 2019 | service robots |
| Corindus | 4.8 | private placement | | March 12, 2019 | surgical robot |
| Determined AI | 11 | Series A | GV (Google Ventures) | March 13, 2019 | AI, deep learning |
| Emergen Group | 29 | Series B | Qiming Venture Partners | March 13, 2019 | industrial automation |
| Fabu Technology | | pre-Series A | Qingsong Fund | March 1, 2019 | autonomous vehicles |
| Fortna | | recapitalization | Thomas H. Lee Partners LP | March 27, 2019 | materials handling |
| ForwardX | 14.95 | Series B | Hupang Licheng Fund | March 21, 2019 | autonomous mobile robots |
| Gaussian Robotics | 14.9 | Series B | Grand Flight Investment | March 20, 2019 | cleaning |
| Hangzhou Guochen Robot Technology | 15 | Series A | Hongcheng Capital, Yingshi Fund (YS Investment) | March 13, 2019 | robotics R&D |
| Hangzhou Jimu Technology Co. | | Series B | Flyfot Ventures | March 6, 2019 | autonomous vehicles |
| InnerSpace | 3.2 | seed | BDC Capital's Women in Technology Fund | March 26, 2019 | IoT |
| Innoviz Technologies | 132 | Series C | China Merchants Capital, Shenzhen Capital Group, New Alliance Capital | March 26, 2019 | lidar |
| Intelligent Marking | | investment | Benjamin Capital | March 6, 2019 | autonomous robots for marking sports fields |
| Kaarta Inc. | 6.5 | Series A | GreenSoil Building Innovation Fund | March 21, 2019 | lidar mapping |
| Kolmostar Inc. | 10 | Series A | | March 5, 2019 | positioning technology |
| Linear Labs | 4.5 | seed | Science Inc., Kindred Ventures | March 26, 2019 | motors |
| MELCO Factory Automation Philippines Inc. | 2.38 | new division | Mitsubishi Electric Corp. | March 12, 2019 | industrial automation |
| Monet Technologies | 4.51 | joint venture | Honda Motor Co., Hino Motors Ltd., SoftBank Corp., Toyota Motor Corp. | March 28, 2019 | self-driving cars |
| Ouster | 60 | investment | Runway Growth Capital, Silicon Valley Bank | March 25, 2019 | lidar |
| Pickle Robot Co. | 3.5 | equity sale | | March 4, 2019 | loading robot |
| Preteckt | 2 | seed | Las Olas Venture Capital | March 26, 2019 | machine learning, automotive |
| Radar | 16 | investment | Sound Ventures, NTT Docomo Ventures, Align Ventures, Beanstalk Ventures, Colle Capital, Founders Fund Pathfinder, Novel TMT | March 28, 2019 | RFID inventory management |
| Revvo (IntelliTire) | 4 | Series A | Norwest Venture Partners | March 26, 2019 | smart tires |
| Shanghai Changren Information Technology | 14.89 | Series A | | March 15, 2019 | Xiaobao healthcare robot |
| TakeOff Technologies Inc. | | equity sale | | March 26, 2019 | grocery robots |
| TartanSense | 2 | seed | Omnivore, Blume Ventures, BEENEXT | March 11, 2019 | weeding robot |
| Teraki | 2.3 | investment | Horizon Ventures, American Family Ventures | March 27, 2019 | AI, automotive electronics |
| Think Surgical | 134 | investment | | March 11, 2019 | surgical robot |
| Titan Medical | 25 | public offering | | March 22, 2019 | surgical robotics |
| TMiRob | | Series B+ | Shanghai Zhangjiang Torch Venture Capital | March 26, 2019 | hospital robot |
| TOYO Automation Co. | | investment | Yamaha Motor Co. | March 20, 2019 | actuators |
| UBTech | | investment | Liangjiang Capital | March 6, 2019 | humanoid |
| Vintra | 4.8 | investment | Bonfire Ventures, Vertex Ventures, London Venture Partners | March 11, 2019 | machine vision |
| Vtrus | 2.9 | investment | | March 8, 2019 | drone inspection |
| Weltmeister Motor | 450 | Series C | Baidu Inc. | March 11, 2019 | self-driving cars |

And here are the mergers and acquisitions:

March 2019 robotics acquisitions

| Company | Amt. (M$) | Acquirer | Date | Technology |
| --- | --- | --- | --- | --- |
| Accelerated Dynamics | | Animal Dynamics | 3/8/2019 | AI, drone swarms |
| Astori AS | | 4Subsea | 3/19/2019 | undersea control systems |
| Brainlab | | Smith & Nephew | 3/12/2019 | surgical robot |
| Figure Eight | 175 | Appen Ltd. | 3/10/2019 | AI, machine learning |
| Floating Point FX | | CycloMedia | 3/7/2019 | machine vision, 3D modeling |
| Florida Turbine Technologies | 60 | Kratos Defense and Security Solutions | 3/1/2019 | drones |
| Infinity Augmented Reality | | Alibaba Group Holding Ltd. | 3/21/2019 | AR, machine vision |
| Integrated Device Technology Inc. | 6700 | Renesas | 3/30/2019 | self-driving vehicle processors |
| Medineering | | Brainlab | 3/20/2019 | surgical |
| Modern Robotics Inc. | 0.97 | Boxlight Corp. | 3/14/2019 | STEM |
| OMNI Orthopaedics Inc. | | Corin Group | 3/6/2019 | surgical robotics |
| OrthoSpace Ltd. | 220 | Stryker Corp. | 3/14/2019 | surgical robotics |
| Osiris Therapeutics | 660 | Smith & Nephew | 3/12/2019 | surgical robotics |
| Restoration Robotics Inc. | 21 | Venus Concept Ltd. | 3/15/2019 | surgical robotics |
| Sofar Ocean Technologies | 7 | Spoondrift, OpenROV | 3/28/2019 | underwater drones, sensors |
| Torc Robotics Inc. | | Daimler Trucks and Buses Holding Inc. | 3/29/2019 | driverless truck software |

Surgical robots make the cut

One of the largest transactions reported in March 2019 was Smith & Nephew’s purchase of Osiris Therapeutics for $660 million. However, some Osiris shareholders are suing to block the acquisition because they believe the price that U.K.-based Smith & Nephew is offering is too low. The shareholders’ confidence reflects a hot healthcare robotics space, where capital, consolidation, and chasing new applications are driving factors.

In the meantime, Stryker Corp. bought sports medicine provider OrthoSpace Ltd. for $220 million. The market for sports medicine will experience a compound annual growth rate of 8.9% between now and 2023, predicts Market Research Future.

Fremont, Calif.-based Think Surgical raised $134 million for its robot-assisted orthopedic surgical device, and Titan Medical closed a $25 million public offering last month.

Venus Concept Ltd. merged with hair-implant provider Restoration Robotics for $21 million, and Shanghai Changren Information Technology raised Series A funding of $14.89 million for its Xiaobao healthcare robot.

Corindus Vascular Robotics Inc. added $5 million to the $15 million it had raised the month before. Brainlab acquired Medineering and was itself acquired by Smith & Nephew.

Driving toward automation in March 2019

Aside from Lyft, the biggest reported transportation robotics transaction in March 2019 was Renesas’ completion of its $6.7 billion purchase of Integrated Device Technology Inc. for its self-driving car chips.

The next biggest deal was Weltmeister Motor’s $450 million Series C, in which Baidu Inc. participated.

Lidar also got some support, with Innoviz Technologies raising $132 million in a Series C round, and Ouster raising $60 million. In a prime example of how driverless technology is “paying a peace dividend” to other applications, Google parent Alphabet’s Waymo unit offered its custom lidar sensors to robotics, security, and agricultural companies.

Automakers recognize the need for 3-D modeling, sensors, and software for autonomous vehicles to navigate safely and accurately. A Daimler unit acquired Torc Robotics Inc., which is working on driverless trucks, and CycloMedia acquired machine vision firm Floating Point FX. The amounts were not specified.

Speaking of machine learning, Appen Ltd. acquired dataset annotation company Figure Eight for $175 million, with a possible $125 million more based on 2019 performance. Denso Corp. and Toyota Motor Corp. contributed $15 million to Airbiquity, which is working on connected vehicles.

Service robots clean up

From retail to cleaning and customer service, the combination of improving human-machine interactions, ongoing staffing turnover and shortages, and companies with round-the-clock operations has contributed to investor interest.

The SoftBank Vision Fund participated in a $300 million round for CloudMinds. The Chinese AI and robotics company’s XR-1 is a humanoid service robot, and it also makes security robots and connects robots to the cloud.

According to its filing with the U.S. Securities and Exchange Commission, TakeOff Technologies Inc. raised an unspecified amount for its grocery robots, an area that many observers expect to grow as consumers become more accustomed to getting home deliveries.

On the cleaning side, Avidbots raised $23.6 million in Series B, led by True Ventures. Gaussian Robotics’ Series B was $14.9 million, with participation from Grand Flight Investment.


Wrapping up Q1 2019

China’s efforts to develop its domestic robotics industry continued, as Emergen Group’s $29 million Series B round was the largest reported investment in industrial automation last month.

Hangzhou Guochen Robot Technology raised $15 million in Series A funding for robotics research and development and integration.

That was followed by ABB’s participation in Series A funding of $7.4 million for Automata, which makes a small collaborative robot arm named Eva. Mitsubishi Electric Corp. said it’s spending $2.38 million to set up a new company, MELCO Factory Automation Philippines Inc., because it expects to grow its business there to $30 million by 2026.

Data startup Spoondrift and underwater drone maker OpenROV merged to form Sofar Ocean Technologies. The new San Francisco company also announced a Series A round of $7 million. Also, 4Subsea acquired underwater control systems maker Astori AS.

In the aerial drone space, Kratos Defense and Security Solutions acquired Florida Turbine Technologies for $60 million, and Vtrus raised $2.9 million for commercializing drone inspections. Kaarta Inc., which makes a lidar for indoor mapping, raised $6.5 million.

The Robot Report broke the news of Aria Insights, formerly known as CyPhy Works, shutting down in March 2019.


Editor’s Note: What defines robotics investments? The answer to this simple question is central to any attempt to quantify robotics investments with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and Investing
Investment must come from venture capital firms, corporate investment groups, angel investors, and other professional sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and Intelligent Systems Companies
Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, think, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or that use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification
Funding information is collected from a number of public and private sources. These include press releases from corporations and investment groups, corporate briefings, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded.


Fears of job-stealing robots are misplaced, say experts

Artificial intelligence will shift jobs, not replace them. | Reuters/Issei Kato

Some good news: The robots aren’t coming for your job. Experts at the Conference on the Future of Work at Stanford University last month said that fears that rapid advances in artificial intelligence, machine learning, and automation will leave all of us unemployed are vastly overstated.

But concerns over growing inequality and the lack of opportunity for many in the labor force — serious matters linked to a variety of structural changes in the economy — are well-founded and need to be addressed, four scholars on artificial intelligence and the economy told an audience at Stanford Graduate School of Business (GSB).

That’s not to say that AI isn’t having a profound effect on many areas of the economy. It is, of course. But understanding the link between the two trends is difficult, and it’s easy to make misleading assumptions about the kinds of jobs that are in danger of becoming obsolete.

“Most jobs are more complex than [many people] realize,” said Hal Varian, Google’s chief economist, during the forum, which was sponsored by the Stanford Institute for Human-Centered Artificial Intelligence.

Today’s workforce is sharply divided by levels of education, and those who have not gone beyond high school are affected the most by long-term changes in the economy, said David Autor, professor of economics at the Massachusetts Institute of Technology.

“It’s a great time to be young and educated. But there’s no clear land of opportunity” for adults who haven’t been to college, said Autor during his keynote presentation.

When predicting future labor market outcomes, it is important to consider both sides of the supply-and-demand equation, said Varian, founding dean of the School of Information at the University of California, Berkeley. Most popular discussion around technology focuses on factors that decrease demand for labor by replacing workers with machines.

However, demographic trends that point to a substantial decrease in the supply of labor are potentially larger in magnitude, he said. Demographic trends are also easier to predict, since we already know, aside from immigration and catastrophes, how many 40-year-olds will live in a country 30 years from now.

Comparing the most aggressive expert estimates of automation’s impact on labor demand with demographic trends that point to a shrinking workforce, Varian said he found that the demographic effect on the labor market is 53% larger than the automation effect. Thus, real wages are more likely to increase than to decrease when both factors are considered.
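For intuition, here is the arithmetic with purely hypothetical magnitudes; the talk did not publish the underlying estimates:

```python
# Hypothetical numbers, only to illustrate the comparison Varian described.
automation_effect = 0.10                       # assumed cut in labor demand
demographic_effect = 1.53 * automation_effect  # supply shrinks 53% more
# Supply falls more than demand, which tends to push real wages up.
print(f"net labor scarcity: {demographic_effect - automation_effect:.3f}")
```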

Automation’s slow crawl

Why hasn’t automation had a more significant effect on the economy to date? The answer isn’t simple, but there’s one key factor: Jobs are made up of a myriad of tasks, many of which are not easily automated.

“Automation doesn’t generally eliminate jobs,” Varian said. “Automation generally eliminates dull, tedious, and repetitive tasks. If you remove all the tasks, you remove the job. But that’s rare.”

Consider the job of a gardener. Gardeners have to mow and water a lawn, prune rose bushes, rake leaves, eradicate pests, and perform a variety of other chores. Mowing and watering are easy tasks to automate, but other chores would cost too much to automate or would be beyond the capabilities of machines — so gardeners are still in demand.


Some jobs, including within the service industry, seem ripe for automation. However, a hotel in Nagasaki, Japan, was the subject of amused news reports when it was forced to “fire” its incompetent robot receptionists and room attendants.

Jobs, unlike repetitive tasks, tend not to disappear. In 1950, the U.S. Census Bureau listed 250 separate jobs. Since then, the only one to be completely eliminated is that of elevator operator, Varian observed. But some of the tasks carried out by elevator operators, such as greeting visitors and guiding them to the right office, have been distributed to receptionists and security guards.

Even the automotive industry, which accounts for roughly half of all robots used by industry, has found that automation has its limits.

“Excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated,” Elon Musk, the co-founder and chief executive of Tesla, said last year.


The pace of jobs change

Technology has always changed rapidly, and that’s certainly the case today. However, there’s often a lag between the time a new machine or process is invented and when it reverberates in the workplace.

“The workplace isn’t evolving as fast as we thought it would,” Paul Oyer, a Stanford GSB professor of economics and senior fellow at the Stanford Institute for Economic Policy Research, said during a panel discussion at the forum. “I thought the gig economy would take over, but it hasn’t. And I thought that by now people would find their ideal mates and jobs online, but that was wrong too.”

Consider the leap from steam power to electric power. When electricity first became available, some factories simply replaced the single large steam engine on the factory floor with a single electric motor. That didn’t make a significant change to the nature of factory work, said Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy. But when machinery throughout the factory was electrified, work changed radically.

The rise of the service sector

Employment in some sectors in which employees tend to have less education is still strong, particularly the service sector. As well-paid professionals settle in cities, they create a demand for services and new types of jobs. MIT’s Autor called these occupations “wealth work jobs,” which include employment for everything from baristas to horse exercisers.

The 10 most common occupations in the U.S. include such jobs as retail salespersons, office clerks, nurses, waiters, and other service-focused work. Notably, traditional occupations, such as factory and other blue-collar work, no longer make the list.

Looming over all of the changes to the labor force is the stark fact that birth rates in the U.S. are at an all-time low, said Varian. As has been widely reported, the aging of the baby boom generation creates demand for service jobs but leaves fewer workers actively contributing labor to the economy.

Even so, the U.S. workforce is in much better shape than those of other industrialized countries. The so-called dependency ratio, the number of people over 65 for every 100 people of working age, will be much higher in Japan, Spain, South Korea, Germany, and Italy by 2050. And not coincidentally, said Varian, countries with high dependency ratios are looking the hardest at automating jobs.

As the country ages, society will have to find new, more efficient ways to train and expand the workforce, said the panelists. They will also have to better accommodate the growing number of women in the workforce, many of whom are still held back by family and household responsibilities.

The robots may not be taking over just yet, but advances in artificial intelligence and machine learning will eventually become more of a challenge to the workforce. Still, it’s heartening to be reminded that, for now, “humans are underrated.”

Editor’s note: This piece was originally published by Stanford Graduate School of Business.


10 Most Automated Countries in the World

Robot density is a measurement that tracks the number of robots per 10,000 workers in an industry. According to the International Federation of Robotics (IFR), robot density in manufacturing industries increased worldwide in 2016. This shows more countries are turning to automation to fill their manufacturing needs. The average global robot density in 2016 was…
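As a quick worked example of the metric (with made-up numbers):

```python
# Robot density: installed robots per 10,000 manufacturing workers.
def robot_density(robots, workers):
    return robots / workers * 10_000

print(robot_density(3_000, 100_000))  # 300.0 robots per 10,000 workers
```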
