Optimus Ride driverless shuttles launch in Brooklyn Navy Yard


Optimus Ride, a self-driving vehicle technology startup that spun out of MIT in 2015, today launched its driverless shuttle service inside a geo-fenced area at the Brooklyn Navy Yard. The 300-acre shipyard and industrial complex has more than 400 manufacturing businesses and 10,000 employees onsite.

Six Optimus Ride driverless shuttles are now transporting passengers inside a 1-mile area between the NYC Ferry stop at Dock 72 and the Yard’s Cumberland Gate at Flushing Avenue. This area acts as a connection point for thousands of daily commuters. Optimus Ride said the service, which is free and runs seven days a week, is expected to transport some 500 passengers daily.

“Launching our self-driving vehicle system in New York at the Brooklyn Navy Yard is yet another validation that not only is Optimus Ride’s system a safe, efficient means of transportation, but also that autonomous vehicles can solve real-world problems in structured environments – today,” said Dr. Ryan Chin, CEO and co-founder of Optimus Ride. “In addition, our system will provide access to and experience with autonomy for thousands of people, helping to increase acceptance and confidence of this new technology, which helps move the overall industry forward.”

Optimus Ride said there will initially be a safety driver and a software operator in each vehicle while it is in operation. Eventually those safety operators will be removed, the company said, and engineers at its Boston office will remotely monitor the shuttles. Chin said the shuttles are programmed to drive between 10 and 15 miles per hour, which adheres to Navy Yard speed limits.
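For a sense of how such constraints might be enforced in software, the Python sketch below clamps a commanded speed inside a geofenced service area. The coordinates, limits, and function names are illustrative assumptions, not Optimus Ride's actual system.

```python
# Hypothetical sketch: clamp commanded speed inside a geofenced site.
# Coordinates, speeds, and function names are illustrative only --
# not Optimus Ride's software.

SITE_SPEED_LIMIT_MPH = 15.0  # upper end of the 10-15 mph range above

# Crude axis-aligned geofence around the service area (made-up bounds).
LAT_MIN, LAT_MAX = 40.698, 40.706
LON_MIN, LON_MAX = -73.975, -73.965

def inside_geofence(lat: float, lon: float) -> bool:
    return LAT_MIN <= lat <= LAT_MAX and LON_MIN <= lon <= LON_MAX

def clamp_speed(lat: float, lon: float, commanded_mph: float) -> float:
    """Respect the site speed limit inside the fence; stop outside it."""
    if inside_geofence(lat, lon):
        return min(commanded_mph, SITE_SPEED_LIMIT_MPH)
    return 0.0  # outside the approved area: no autonomous motion

print(clamp_speed(40.702, -73.970, 25.0))  # -> 15.0
```

A real deployment would derive the fence and limits from surveyed maps and the vehicle's planner rather than a single coordinate check.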

Optimus Ride raffled off autonomous shuttle rides at the Robotics Summit & Expo 2019, which is produced by The Robot Report. Its all-electric shuttles move at slow speeds, maxing out at 25 MPH, and include three LIDAR sensors (two in the front, one in back) and eight cameras.


Aerial view of the Brooklyn Navy Yard, which now has an autonomous shuttle service from Optimus Ride. | Credit: Optimus Ride

According to the New York Post, demand for the driverless shuttle service was slow on launch day, but the passengers who did ride were happy with it. The Post reported:

“One user, 46-year-old Carey Booth, who commutes by ferry from her home in Astoria to her job at the Brooklyn Navy Yard every day, took a ride in the shuttle while on her way to work.

“The ride was extremely smooth, smoother than the Navy Yard buses,” Booth said. “The car adjusted really well on turns and when other cars were driving nearby. It seems like they’ve tested it really extensively.”

Booth said that she’d “love” to see the fleet expand its coverage.

“I’d be cool with these cars being outside the yard at some point. I’d definitely love to see them all over the yard first though, because right now it’s just from point A to point B,” Booth said, adding that “I trust these cars more than I trust the drivers out on the streets.”

Carlos de Jesus, 38, gave the shuttle a spin before he headed to his volunteer job in Greenpoint.

“This was my first time ever in a self-driven vehicle. If no one had told me, I wouldn’t have known it was self-driven,” said de Jesus, who lives nearby. “Braking and turning might be a little quicker than you’re used to, but everything was fine.”

De Jesus called it a “smooth ride.”


Optimus Ride’s autonomous shuttle operating in the Union Point development in South Weymouth, MA. | Credit: Optimus Ride

Optimus Ride testing in other states

Optimus Ride is testing its Level 4 shuttles in other states as well. It is heading to Paradise Valley Estates, a gated community in Fairfield, California, where it will provide self-driving tours to prospective residents and shuttle current residents to other locations within the property. This service is expected to start in Q2 2019.

In June 2019, Optimus Ride began providing tenants at Brookfield Properties’ Halley Rise, a new $1.4 billion mixed-use development in Reston, Virginia, with an autonomous shuttle service to locations within the property. And for the past two-plus years, the company has been testing in two Massachusetts locations: Boston’s Seaport District and South Weymouth. The company says its shuttles have completed over 20,000 trips since 2015.

Optimus Ride raised an $18 million Series A round in November 2017 that was led by Greycroft Partners. Other participating investors included Emerson Collective, Fraser McCombs Capital, and MIT Media Lab director Joi Ito. Then in April 2019, it raised another $20.7 million toward a prospective $60 million round.

Other companies working on autonomous vehicles that operate in geo-fenced areas include May Mobility, Navya, Perrone Robotics and Voyage. May Mobility, which raised $22 million in February 2019, is testing its shuttles across the US, while Voyage is targeting retirement communities. Navya, however, recently changed its strategic direction. Instead of deploying its own autonomous shuttles, Navya is now licensing its technology to third-party companies.


Velodyne Lidar acquires Mapper.ai for advanced driver assistance systems

SAN JOSE, Calif. — Velodyne Lidar Inc. today announced that it has acquired Mapper.ai’s mapping and localization software, as well as its intellectual property assets. Velodyne said Mapper’s technology will enable it to accelerate development of Vella, the software being built around its directional-view Velarray lidar sensor.

The Velarray is the first solid-state Velodyne lidar sensor that is embeddable and fits behind a windshield, said Velodyne, which described it as “an integral component for superior, more effective advanced driver assistance systems” (ADAS).

The company provides lidar sensors for autonomous vehicles and driver assistance. David Hall, Velodyne’s founder and CEO, invented real-time surround-view lidar systems in 2005 as part of Velodyne Acoustics. His invention revolutionized perception and autonomy for automotive, new mobility, mapping, robotics, and security.

Velodyne said its high-performance product line includes a broad range of sensors, including the cost-effective Puck, the versatile Ultra Puck, and the autonomy-advancing Alpha Puck.

Mapper.ai staffers to join Velodyne

Mapper’s entire leadership and engineering teams will join Velodyne, bolstering the company’s large and growing software-development group. The talent from Mapper.ai will augment the current team of engineers working on Vella software, which will accelerate Velodyne’s production of ADAS solutions.

Velodyne claimed its technology will allow customers to unlock advanced ADAS capabilities, including pedestrian and bicycle avoidance, Lane Keep Assistance (LKA), Automatic Emergency Braking (AEB), Adaptive Cruise Control (ACC), and Traffic Jam Assist (TJA).
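For a rough sense of the logic underlying one of those features, the sketch below implements a minimal time-to-collision check of the kind an automatic emergency braking system builds on. The threshold and function names are illustrative assumptions, not Velodyne's Vella software.

```python
# Hypothetical sketch of an AEB-style time-to-collision (TTC) check.
# Thresholds and structure are illustrative, not Velodyne's Vella code.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """TTC in seconds; infinite if the gap is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def should_brake(range_m: float, closing_speed_mps: float,
                 ttc_threshold_s: float = 1.5) -> bool:
    """Trigger emergency braking when TTC falls below the threshold."""
    return time_to_collision(range_m, closing_speed_mps) < ttc_threshold_s

# Example: object 12 m ahead, closing at 10 m/s -> TTC 1.2 s -> brake.
print(should_brake(12.0, 10.0))  # True
```

A production system would fuse lidar, radar, and camera tracks and model braking dynamics rather than rely on a single threshold.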

“By adding Vella software to our broad portfolio of lidar technology, Velodyne is poised to revolutionize ADAS performance and safety,” stated Anand Gopalan, chief technology officer at Velodyne. “Expanding our team to develop Vella is a giant step towards achieving our goal of mass-producing an ADAS solution that dramatically improves roadway safety.”

“Mapper technology gives us access to some key algorithmic elements and accelerates our development timeline,” Gopalan added. “Together, our sensors and software will allow powerful lidar-based safety solutions to be available on every vehicle.”

Mapper.ai to contribute to Velodyne software

Mapper.ai developers will work on the Vella software for the Velarray sensor. Source: Velodyne Lidar

“Velodyne has both created the market for high-fidelity automotive lidar and established itself as the leader. We have been Velodyne customers for years and have already integrated their lidar sensors into easily deployable solutions for scalable high-definition mapping,” said Dr. Nikhil Naikal, founder and CEO of Mapper, who is joining Velodyne. “We are excited to use our technology to speed up Velodyne’s lidar-centric software approach to ADAS.”

In addition to ADAS, Velodyne said it will incorporate Mapper technology into lidar-centric solutions for other emerging applications, including autonomous vehicles, last-mile delivery services, security, smart cities, smart agriculture, robotics, and unmanned aerial vehicles.


COAST Autonomous to deploy first self-driving vehicles at rail yard

PASADENA, Calif. — COAST Autonomous today announced that Harbor Rail Services of California has selected it to deploy self-driving vehicles at the Kinney County Railport in Texas.

This groundbreaking collaboration is the first deployment of self-driving vehicles at a U.S. rail yard, said the companies. Harbor Rail and COAST teams have identified a number of areas where autonomous vehicles can add value, including staff transportation, delivery of supplies and equipment, perimeter security, and lawn mowing.

COAST Autonomous is a software and technology company focused on delivering autonomous vehicle (AV) solutions at appropriate speeds for urban and campus environments. COAST said its mission is to build community by connecting people with mobility solutions that put pedestrians first and give cities back to people.

COAST has developed a full stack of AV software that includes mapping and localization, robotics and artificial intelligence, fleet management and supervision systems. Partnering with proven manufacturers, COAST said it can provide a variety of vehicles equipped with its software to offer Mobility-as-a-Service (MaaS) to cities, theme parks, campuses, airports, and other urban environments.

The company said its team has experience and expertise in all aspects of implementing and operating AV fleets while prioritizing safety and the user experience. Last year, the company conducted a demonstration in New York’s Times Square.

Harbor Rail operates railcar repair facilities across the U.S., including the Kinney County Railport (KCRP), a state-of-the-art railcar repair facility that Harbor Rail operates near the U.S.-Mexico border. KCRP is located on 470 acres of property owned by Union Pacific, the largest railroad in North America. The facility prepares railcars to meet food-grade guidelines, so they are ready to be loaded with packaged beer in Mexico and return to the U.S. with product for distribution.

COAST completes mapping, ready to begin service

COAST has completed 3D mapping of the facility, a first step in any such deployment, and the first self-driving vehicle is expected to begin service at KCRP next month.

“Through the introduction of redesigned trucks, innovative process improvements, and adoption of data-driven KPIs [key performance indicators], Harbor Rail successfully reduced railcar rejection rates from 30% to 0.03% in KCRP’s first year of operations,” said Mark Myronowicz, president of Harbor Rail. “However, I am always looking for ways to improve our performance and provide an even better service for our customers.”


Source: COAST Autonomous

“At a large facility like KCRP, we have many functions that I am convinced can be carried out by COAST vehicles,” Myronowicz said. “This will free up additional labor to work on railcars, make us even more efficient, help keep the facility safe at night, and even cut the grass when most of us are asleep. This is a fantastic opportunity to demonstrate Harbor Rail’s commitment to being at the forefront of innovation and customer service.”

“This is an exciting moment for COAST, and we are looking forward to working with Harbor Rail’s industry-leading team,” said David M. Hickey, chairman and CEO of COAST Autonomous. “KCRP is exactly the type of facility that will show how self-driving technology can improve efficiency and cut costs.”

“While the futuristic vision of driverless cars has grabbed most of the headlines, COAST’s team has been focused on useful mobility solutions that can actually be deployed and create tremendous value for private sites, campuses, and urban centers,” he said. “Just as railroads are often the unsung heroes of the logistics industry, COAST’s vehicles will happily go about their jobs unnoticed and quietly change the world.”


Perrone Robotics begins pilot of first autonomous public shuttle in Virginia

ALBEMARLE COUNTY, Va. — Perrone Robotics Inc., in partnership with Albemarle County and JAUNT Inc., last week announced that Virginia’s first public autonomous shuttle service began pilot operations in Crozet, Va.

The shuttle service, called AVNU for “Autonomous Vehicle, Neighborhood Use,” is driven by Perrone Robotics’ TONY (TO Navigate You) autonomous shuttle technology applied to a Polaris Industries Inc. GEM shuttle. Perrone Robotics said its Neighborhood Electric Vehicle (NEV) shuttle has industry-leading perception and guidance capabilities and will drive fully autonomously (with a safety driver) through county neighborhoods and downtown areas on public roads, navigating vehicle and pedestrian traffic. The base GEM vehicle meets federal safety standards for vehicles in its class.

“With over 33,000 autonomous miles traveled using our technology, TONY-powered vehicles bring the highest level of autonomy available in the world today to NEV shuttles,” said Paul Perrone, founder/CEO of Perrone Robotics. “We are deploying an AV platform that has been carefully refined since 2003, applied in automotive and industrial autonomy spaces, and now being leveraged to bring last-mile services to communities such as those here in Albemarle County, Va. What we deliver is a platform that operates shuttles autonomously in complex environments with roundabouts, merges, and pedestrian-dense areas.”

The TONY-based AVNU shuttle will offer riders trips within local residential developments, trips to connect neighborhoods, and connections from these areas to the downtown business district.


Perrone Robotics provides autonomy for Polaris GEM shuttles. Source: Polaris Industries

More routes to come for Perrone AVNU shuttles

After the pilot phase, additional routes will demonstrate Albemarle County development initiatives, such as connector services for satellite parking. They will also connect with JAUNT‘s commuter shuttles, which are also targeted for autonomous operation with TONY technology.

“We have seen other solutions out there that require extensive manual operation for large portions of the course and very low speeds for traversal of tricky sections,” noted Perrone. “We surpass these efforts by using our innovative, super-efficient, and completely novel and patented autonomous engine, MAX®, which has over 16 years of engineering and over 33,000 on- and off-road miles behind it. We also use AI, but as a tool, not a crutch.”

“It is with great pleasure that we launch the pilot of the next generation of transportation — autonomous neighborhood shuttles — here in Crozet,” said Ann Mallek, White Hall District Supervisor. “Albemarle County is so proud to support our home town company, Perrone Robotics, and work with our transit provider JAUNT, through Smart Mobility Inc., to bring this project to fruition.”

Perrone said that AVNU is electrically powered, so the shuttle is quiet and non-polluting, and it uses solar panels to significantly extend system range. AVNU has been extensively tested by Perrone Robotics, and testing data has been evaluated by Albemarle County and JAUNT prior to launch.


Waymo self-driving cars OK’d to carry passengers in California



Waymo’s self-driving cars can now carry passengers in California. | Credit: Waymo

Waymo has been testing its self-driving cars in California for years. Now Alphabet’s self-driving car division has been granted a permit to carry passengers in the Golden State. Waymo is now part of California’s Autonomous Vehicle Passenger Service pilot program, joining AutoX Technologies, Pony.ai and Zoox.

The permit, which was granted by the California Public Utilities Commission (CPUC), requires a Waymo safety operator to be behind the wheel at all times and doesn’t allow Waymo to charge riders. The permit is good for three years.

“The CPUC allows us to participate in their pilot program, giving Waymo employees the ability to hail our vehicles and bring guests on rides within our South Bay territory,” Waymo said in a statement. “This is the next step in our path to eventually expand and offer more Californians opportunities to access our self-driving technology, just as we have gradually done with Waymo One in Metro Phoenix.”

Waymo also received an exemption from the CPUC that allows it to use a third-party company to contract out safety operators. Waymo said all safety operators go through a proprietary driver training program. In a letter requesting the exemption, Waymo said that while its “team of test drivers will include some full-time Waymo employees, operating and scaling a meaningful pilot requires a large group of drivers who are more efficiently engaged through Waymo’s experienced and specialized third-party staffing providers.”

Waymo self-driving taxi service coming to California?

Of course, this permit opens the door for Waymo to eventually offer an autonomous taxi service in California. But a Waymo spokesperson said there was no timetable for rolling out a self-driving taxi-like service in California. For now, the Waymo service will be limited to its employees and their guests in the Silicon Valley area.

Waymo One, a commercial self-driving service, launched in December 2018 in Phoenix, Ariz. It has been offering rides to more than 400 volunteer testers. Waymo recently announced a partnership with Lyft. It will deploy 10 autonomous vehicles in the coming months that will be available through the Lyft app. There will be safety drivers behind the wheel in this partnership, too.

Calif. Autonomous Vehicle Disengagements 2018

| Company | Disengagements per 1,000 miles (2018) | Miles per disengagement (2018) | Miles driven (2018) | Miles per disengagement (2017) |
|---|---|---|---|---|
| Waymo | 0.09 | 11,017 | 1,271,587 | 5,595.95 |
| GM Cruise | 0.19 | 5,204.9 | 447,621 | 1,254.06 |
| Zoox | 0.52 | 1,922.8 | 30,764 | 282.96 |
| Nuro | 0.97 | 1,028.3 | 24,680 | -- |
| Pony.ai | 0.98 | 1,022.3 | 16,356 | -- |
| Nissan | 4.75 | 210.5 | 5,473 | 208.36 |
| Baidu | 4.86 | 205.6 | 18,093 | 41.06 |
| AIMotive | 4.96 | 201.6 | 3,428 | -- |
| AutoX | 5.24 | 190.8 | 22,710 | -- |
| Roadstar.AI | 5.70 | 175.3 | 7,539 | -- |
| WeRide/JingChi | 5.71 | 173.5 | 15,440.80 | -- |
| Aurora | 10.01 | 99.9 | 32,858 | -- |
| Drive.ai | 11.91 | 83.9 | 4,616.69 | 43.59 |
| PlusAI | 18.40 | 54.4 | 10,816 | -- |
| Nullmax | 22.40 | 44.6 | 3,036 | -- |
| Phantom AI | 48.20 | 20.7 | 4,149 | -- |
| NVIDIA | 49.73 | 20.1 | 4,142 | 4.63 |
| SF Motors | 90.56 | 11 | 2,561 | -- |
| Telenav | 166.67 | 6.0 | 303 | 2 |
| BMW | 219.51 | 4.6 | 41 | -- |
| CarOne/Udelv | 260.27 | 3.8 | 219 | -- |
| Toyota | 393.70 | 2.5 | 381 | -- |
| Qualcomm | 416.63 | 2.4 | 240.02 | -- |
| Honda | 458.33 | 2.2 | 168 | -- |
| Mercedes Benz | 682.52 | 1.5 | 1,749.39 | 1.29 |
| SAIC | 829.61 | 1.2 | 634.03 | -- |
| Apple | 871.65 | 1.1 | 79,745 | -- |
| Uber | 2,608.46 | 0.4 | 26,899 | -- |

Dashes indicate that no 2017 figure was reported.
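The two 2018 columns are reciprocals: disengagements per 1,000 miles is simply 1,000 divided by miles per disengagement. A quick Python check against Waymo's row in the table above:

```python
# Recompute the table's derived metrics from Waymo's 2018 row.
miles_driven = 1_271_587
miles_per_disengagement = 11_017

disengagements = miles_driven / miles_per_disengagement   # ~115 events
per_1000_miles = 1000 / miles_per_disengagement           # ~0.09

print(f"{disengagements:.0f} disengagements")
print(f"{per_1000_miles:.2f} per 1,000 miles")

# Year-over-year change vs. 2017 (5,595.95 miles per disengagement);
# prints ~97%, in line with the "96 percent increase" cited below.
print(f"{11_017 / 5_595.95 - 1:.0%} more miles between disengagements")
```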

Waymo’s track record in California

According to the California Department of Motor Vehicles (DMV), Waymo had the best-performing autonomous vehicles in the state for the second consecutive year. Some have said the DMV’s tracking method is too vague and has allowed companies to avoid reporting certain events.

Nonetheless, Waymo’s self-driving cars experienced one disengagement every 11,017 miles. That performance marks a 50 percent reduction in the rate and a 96 percent increase in the average miles traveled between disengagements compared to the 2017 numbers. In 2016, Waymo had one disengagement every 5,128 miles. Waymo also drove significantly more miles, up from 352,000 miles in 2017 to 1.2 million miles in 2018, which makes the performance even more impressive.

Waymo is also working on autonomous trucks. It has hired 13 former employees of Anki, the once-popular consumer robotics company that closed down. Anki co-founder and CEO Boris Sofman was hired as Waymo’s director of engineering, head of trucking.


Self-driving cars may not be best for older drivers, says Newcastle University study


VOICE member Ian Fairclough and study lead Dr. Shuo Li in test of older drivers. Source: Newcastle University

With more people living longer, driving is becoming increasingly important in later life, helping older drivers to stay independent, socially connected and mobile.

But driving is also one of the biggest challenges facing older people. Age-related problems with eyesight, motor skills, reflexes, and cognitive ability increase the risk of an accident or collision, and the increased frailty of older drivers means they are more likely to be seriously injured or killed as a result.

“In the U.K., older drivers are tending to drive more often and over longer distances, but as the task of driving becomes more demanding, we see them adjust their driving to avoid difficult situations,” explained Dr. Shuo Li, an expert in intelligent transport systems at Newcastle University.

“Not driving in bad weather when visibility is poor, avoiding unfamiliar cities or routes and even planning journeys that avoid right-hand turns are some of the strategies we’ve seen older drivers take to minimize risk. But this can be quite limiting for people.”

Potential game-changer

Self-driving cars are seen as a potential game-changer for this age group, Li noted. Fully automated, they are unlikely to require a license and could negotiate bad weather and unfamiliar cities in all situations without input from the driver.

But it’s not as clear-cut as it seems, said Li.

“There are several levels of automation, ranging from zero, where the driver has complete control, through to Level 5, where the car is in charge,” he explained. “We’re some way off Level 5, but Level 3 may be just around the corner. This will allow the driver to be completely disengaged — they can sit back and watch a film, eat, even talk on the phone.”

“But, unlike Level 4 or 5, there are still some situations where the car would ask the driver to take back control, and at that point they need to be switched on and back in driving mode within a few seconds,” he added. “For younger people, that switch between tasks is quite easy, but as we age it becomes increasingly difficult, and this is further complicated if the conditions on the road are poor.”

Newcastle University DriveLAB tests older drivers

Led by Professor Phil Blythe and Dr. Li, the Newcastle University team has been researching the time it takes for older drivers to take back control of an automated car in different scenarios, as well as the quality of their driving in those situations.

Using the University’s state-of-the-art DriveLAB simulator, 76 volunteers were divided into two different age groups (20-35 and 60-81).

They experienced automated driving for a short period and were then asked to “take back” control of a highly automated car and avoid a stationary vehicle on a motorway, a city road, and in bad weather conditions when visibility was poor.

The starting point in all situations was “total disengagement” — turned away from the steering wheel, feet out of the foot well, reading aloud from an iPad.

The time taken to regain control of the vehicle was measured at three points: when the driver was back in the correct position (reaction time); when they made an “active input,” such as braking or taking the steering wheel (take-over time); and finally the point at which they registered the obstruction and indicated to move out and avoid it (indicator time).

“In clear conditions, the quality of driving was good but the reaction time of our older volunteers was significantly slower than the younger drivers,” said Li. “Even taking into account the fact that the older volunteers in this study were a really active group, it took about 8.3 seconds for them to negotiate the obstacle compared to around 7 seconds for the younger age group. At 60 mph, that means our older drivers would have needed an extra 35 m warning distance — that’s equivalent to the length of 10 cars.

“But we also found older drivers tended to exhibit worse takeover quality in terms of operating the steering wheel, the accelerator and the brake, increasing the risk of an accident,” he said.
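The extra warning distance cited above follows from basic kinematics: at 60 mph, the 1.3-second difference in response time corresponds to roughly 35 m of travel, as this quick check shows:

```python
# Check the extra warning distance implied by the DriveLAB numbers.
MPH_TO_MPS = 0.44704

speed_mps = 60 * MPH_TO_MPS              # 60 mph ~= 26.8 m/s
older_s, younger_s = 8.3, 7.0            # response times from the study
extra_m = (older_s - younger_s) * speed_mps

print(f"{extra_m:.1f} m")                # ~34.9 m, i.e. the ~35 m cited
```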

In bad weather, the team saw the younger drivers slow down more, bringing their reaction times more in line with the older drivers, while driving quality dropped across both age groups.

In the city scenario, this resulted in 20 collisions and critical encounters among the older participants compared to 12 among the younger drivers.


VOICE member Pat Wilkinson. Source: Newcastle University

Designing automated cars of the future

The research team also explored older drivers’ opinions of, and requirements for, the design of automated vehicles after the volunteers gained first-hand experience with the technology on the driving simulator.

Older drivers were generally positive towards automated vehicles but said they would want to retain some level of control over their automated cars. They also felt they required regular updates from the car, similar to a SatNav, so the driver has an awareness of what’s happening on the road and where they are even when they are busy with another activity.

The research team are now looking at how the vehicles can be improved to overcome some of these problems and better support older drivers when the automated cars hit our roads.

“I believe it is critical that we understand how new technology can support the mobility of older people and, more importantly, that new transport systems are designed to be age friendly and accessible,” said Newcastle University Prof. Phil Blythe, who led the study and is chief scientific advisor for the U.K. Department for Transport. “The research here on older people and the use of automated vehicles is only one of many questions we need to address regarding older people and mobility.”

“Two pillars of the Government’s Industrial strategy are the Future of Mobility Grand Challenge and the Ageing Society Grand Challenge,” he added. “Newcastle University is at the forefront of ensuring that these challenges are fused together to ensure we shape future mobility systems for the older traveller, who will be expecting to travel well into their eighties and nineties.”


Case studies of older drivers

Pat Wilkinson, who lives in Rowland’s Gill, County Durham, has been supporting the DriveLAB research for almost nine years.

Now 74, the former Magistrate said it’s interesting to see how technology is changing and gradually taking the control – and responsibility – away from the driver.

“I’m not really a fan of the cars you don’t have to drive,” she said. “As we get older, our reactions slow, but I think for the young ones, chatting on their phones or looking at the iPad, you just couldn’t react quickly if you needed to either. I think it’s an accident waiting to happen, whatever age you are.”

“And I enjoy driving – I think I’d miss that,” Wilkinson said. “I’ve driven since I first passed my test in my 20s, and I hope I can keep on doing so for a long time.

“I don’t think fully driverless cars will become the norm, but I do think the technology will take over more,” she said. “I think studies like this that help to make it as safe as possible are really important.”

Ian Fairclough, 77 from Gateshead, added: “When you’re older and the body starts to give up on you, a car means you can still have adventures and keep yourself active.”

“I passed my test at 22 and was in the army for 25 years, driving all sorts of vehicles in all terrains and climates,” he recalled. “Now I avoid bad weather, early mornings when the roads are busy and late at night when it’s dark, so it was really interesting to take part in this study and see how the technology is developing and what cars might be like a few years from now.”

Fairclough took part in two of the studies in the VR simulator and said it was difficult to switch your attention quickly from one task to another.

“It feels very strange to be a passenger one minute and the driver the next,” he said. “But I do like my Toyota Yaris. It’s simple, clear and practical.  I think perhaps you can have too many buttons.”

Wilkinson and Fairclough became involved in the project through VOICE, a group of volunteers working together with researchers and businesses to identify the needs of older people and develop solutions for a healthier, longer life.


Argo AI, CMU developing autonomous vehicle research center



Argo AI autonomous vehicle. | Credit: Argo AI

Argo AI, a Pittsburgh-based autonomous vehicle company, has donated $15 million to Carnegie Mellon University (CMU) to fund a new research center. The Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research will “pursue advanced research projects to help overcome hurdles to enabling self-driving vehicles to operate in a wide variety of real-world conditions, such as winter weather or construction zones.”

Argo was founded in 2016 by a team with ties to CMU (more on that later). The five-year partnership between Argo and CMU will fund research into advanced perception and next-generation decision-making algorithms for autonomous vehicles. The center’s research will address a number of technical topics, including smart sensor fusion, 3D scene understanding, urban scene simulation, map-based perception, imitation and reinforcement learning, behavioral prediction and robust validation of software.

“We are thrilled to deepen our partnership with Argo AI to shape the future of self-driving technologies,” CMU President Farnam Jahanian said. “This investment allows our researchers to continue to lead at the nexus of technology and society, and to solve society’s most pressing problems.”

In February 2017, Ford announced that it was investing $1 billion over five years in Argo, combining Ford’s autonomous vehicle development expertise with Argo AI’s robotics experience. Earlier this month, Argo unveiled its third-generation test vehicle, a modified Ford Fusion Hybrid. Argo is now testing its autonomous vehicles in Detroit, Miami, Palo Alto, and Washington, DC.

Argo last week released its HD maps dataset, Argoverse. Argo said this will help the research community “compare the performance of different (machine learning – deep net) approaches to solve the same problem.”



“Argo AI, Pittsburgh and the entire autonomous vehicle industry have benefited from Carnegie Mellon’s leadership. It’s an honor to support development of the next-generation of leaders and help unlock the full potential of autonomous vehicle technology,” said Bryan Salesky, CEO and co-founder of Argo AI. “CMU and now Argo AI are two big reasons why Pittsburgh will remain the center of the universe for self-driving technology.”

Deva Ramanan, an associate professor in the CMU Robotics Institute, who also serves as machine learning lead at Argo AI, will be the center’s principal investigator. The center’s research will involve faculty members and students from across CMU. The center will give students access to the fleet-scale data sets, vehicles and large-scale infrastructure that are crucial for advancing self-driving technologies and that otherwise would be difficult to obtain.

CMU’s other autonomous vehicle partnerships

Argo isn’t the first autonomous vehicle company to see potential in CMU. In addition to Argo AI, CMU performs related research supported by General Motors, Uber and other transportation companies.

Its partnership with Uber is perhaps CMU’s most high-profile autonomous vehicle partnership, and it’s for all the wrong reasons. In 2015, Uber announced a strategic partnership with CMU that included the creation of a research lab near campus aimed at kick-starting autonomous vehicle development.

But that relationship ended up gutting CMU’s National Robotics Engineering Center (NREC). More than a dozen CMU researchers, including the NREC’s director, left to work at the Uber Advanced Technologies Center.


Argo’s connection to CMU

As mentioned earlier, Argo’s co-founders have strong ties to CMU. Argo co-founder and president Peter Rander earned his master’s and Ph.D. degrees at CMU. Salesky graduated from the University of Pittsburgh in 2002 but worked at the NREC for a number of years, managing a portfolio of the center’s largest commercial programs, which included autonomous mining trucks for Caterpillar. In 2007, Salesky led software engineering for Tartan Racing, CMU’s winning entry in the DARPA Urban Challenge.

Salesky departed NREC and joined the Google self-driving car team in 2011 to continue the push toward making self-driving cars a reality. While at Google, he was responsible for the development and manufacture of the team’s hardware portfolio, which included self-driving sensors, computers and several vehicle development programs.

Brett Browning, Argo’s VP of Robotics, received his Ph.D. (2000) and bachelor’s degree in electrical engineering and science from the University of Queensland. He was a senior faculty member at the NREC for 12-plus years, pursuing field robotics research in defense, oil and gas, mining and automotive applications.

SwRI system tests GPS spoofing of autonomous vehicles


Southwest Research Institute has developed a cyber security system to test for vulnerabilities in automated vehicles and other technologies that use GPS receivers for positioning, navigation and timing.

“This is a legal way for us to improve the cyber resilience of autonomous vehicles by demonstrating a transmission of spoofed or manipulated GPS signals to allow for analysis of system responses,” said Victor Murray, head of SwRI’s Cyber Physical Systems Group in the Intelligent Systems Division.

GPS spoofing is a malicious attack that broadcasts incorrect signals to deceive GPS receivers, while GPS manipulation modifies a real GPS signal. GPS satellites orbiting the Earth pinpoint physical locations of GPS receivers embedded in everything from smartphones to ground vehicles and aircraft.

Illustration of a GPS spoofing attack. Credit: Simon Parkinson

SwRI designed the new tool to meet United States federal regulations. Testing for GPS vulnerabilities in a mobile environment had previously been difficult because federal law prohibits over-the-air re-transmission of GPS signals without prior authorization.

SwRI’s spoofing test system places a physical component on or in line with a vehicle’s GPS antenna and a ground station that remotely controls the GPS signal. The system receives the actual GPS signal from an on-vehicle antenna, processes it and inserts a spoofed signal, and then broadcasts the spoofed signal to the GPS receiver on the vehicle. This gives the spoofing system full control over a GPS receiver.
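Conceptually, the end effect of such an attack is a controlled displacement of the position the receiver reports. The toy Python sketch below simulates only that effect on coordinates; it is an illustration of the idea, not SwRI's system, and involves no radio transmission.

```python
# Toy simulation of the *effect* of GPS spoofing on a navigation stack:
# the receiver's reported position is displaced by an attacker-chosen
# offset. Purely illustrative; not SwRI's test system.
import math

EARTH_RADIUS_M = 6_371_000.0

def spoofed_position(lat: float, lon: float,
                     offset_north_m: float, offset_east_m: float):
    """Return coordinates displaced by the given metric offsets."""
    dlat = math.degrees(offset_north_m / EARTH_RADIUS_M)
    dlon = math.degrees(offset_east_m /
                        (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# A 10 m lateral shift -- enough to push a lane-following vehicle
# off the road, as in SwRI's track test described below.
print(spoofed_position(29.4, -98.5, 0.0, 10.0))
```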

Related: Watch SwRI engineers trick object detection system

While testing the system on an automated vehicle on a test track, engineers were able to alter the vehicle’s course by 10 meters, effectively causing it to drive off the road. The vehicle could also be forced to turn early or late.

“Most automated vehicles will not rely solely on GPS because they use a combination of sensors such as lidar, camera machine vision, GPS and other tools,” Murray said. “However, GPS is a basis for positioning in a lot of systems, so it is important for manufacturers to have the ability to design technology to address vulnerabilities.”

SwRI develops automotive cybersecurity solutions on embedded systems and internet of things (IoT) technology featuring networks and sensors. Connected and autonomous vehicles are vulnerable to cyber threats because they broadcast and receive signals for navigation and positioning.

The new system was developed through SwRI’s internal research program. Future related research will explore the role of GPS spoofing in drones and aircraft.

Editor’s Note: This article was republished from SwRI’s website.

Researchers back Tesla’s non-LiDAR approach to self-driving cars



If you haven’t heard, Tesla CEO Elon Musk is not a LiDAR fan. Most companies working on autonomous vehicles – including Ford, GM Cruise, Uber and Waymo – think LiDAR is an essential part of the sensor suite. But not Tesla. Its vehicles don’t have LiDAR; they rely on cameras, radar, GPS, maps and other sensors.

“LiDAR is a fool’s errand,” Musk said at Tesla’s recent Autonomy Day. “Anyone relying on LiDAR is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

“LiDAR is lame,” Musk added. “They’re gonna dump LiDAR, mark my words. That’s my prediction.”

While not as anti-LiDAR as Musk, it appears researchers at Cornell University agree with his LiDAR-less approach. Using two inexpensive cameras on either side of a vehicle’s windshield, Cornell researchers have discovered they can detect objects with nearly LiDAR’s accuracy and at a fraction of the cost.

The researchers found that analyzing the captured images from a bird’s-eye view, rather than the more traditional frontal view, more than tripled their accuracy, making stereo cameras a viable and low-cost alternative to LiDAR.

Tesla’s Sr. Director of AI Andrej Karpathy outlined a nearly identical strategy during Autonomy Day.

“The common belief is that you couldn’t make self-driving cars without LiDARs,” said Kilian Weinberger, associate professor of computer science at Cornell and senior author of the paper Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving. “We’ve shown, at least in principle, that it’s possible.”

LiDAR uses lasers to create 3D point maps of the surroundings, measuring objects’ distances via the speed of light. Stereo cameras rely on two perspectives to establish depth. Critics say their accuracy in object detection is too low, but the Cornell researchers found the data they captured from stereo cameras was nearly as precise as LiDAR’s. The gap in accuracy emerged when the stereo cameras’ data was analyzed, they say.
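Both sensing principles reduce to short formulas: time-of-flight ranging for LiDAR, and triangulation from disparity for a stereo pair. A minimal sketch, with made-up camera parameters:

```python
# Two ways to measure depth, in miniature.
# Camera focal length and baseline are made-up example values.
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_s: float) -> float:
    """LiDAR: distance from a laser pulse's round-trip time."""
    return C * round_trip_s / 2

def stereo_depth(disparity_px: float, focal_px: float = 721.5,
                 baseline_m: float = 0.54) -> float:
    """Stereo: depth Z = f * B / d (triangulation from two views)."""
    return focal_px * baseline_m / disparity_px

print(lidar_range(200e-9))   # 200 ns round trip -> ~30 m
print(stereo_depth(13.0))    # ~30 m for a 13-pixel disparity
```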

“When you have camera images, it’s so, so, so tempting to look at the frontal view, because that’s what the camera sees,” Weinberger says. “But there also lies the problem, because if you see objects from the front then the way they’re processed actually deforms them, and you blur objects into the background and deform their shapes.”

Cornell researchers compare AVOD object detections using LiDAR, pseudo-LiDAR, and frontal-view (stereo) inputs. Ground-truth boxes are in red, predicted boxes in green; the observer in the pseudo-LiDAR plots (bottom row) is on the far left, looking to the right. The frontal-view approach (right) miscalculates the depths of even nearby objects and misses far-away objects entirely.

For most self-driving cars, the data captured by cameras or sensors is analyzed using convolutional neural networks (CNNs). The Cornell researchers say CNNs are very good at identifying objects in standard color photographs, but they can distort the 3D information if it’s represented from the front. Again, when Cornell researchers switched the representation from a frontal perspective to a bird’s-eye view, the accuracy more than tripled.
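The representation change at the heart of the result can be sketched in a few lines: back-project each pixel's estimated depth into a 3D point (the "pseudo-LiDAR" cloud), then look at the points from above. The intrinsics below are placeholders, and this is a schematic of the idea rather than the paper's code.

```python
# Schematic of the pseudo-LiDAR idea: turn a per-pixel depth map into
# 3D points, then take the bird's-eye (x-z) view. Camera intrinsics
# here are placeholders, not the paper's actual calibration.
import numpy as np

def depth_to_pseudo_lidar(depth: np.ndarray, fx: float, fy: float,
                          cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # right of camera center
    y = (v - cy) * z / fy          # below camera center
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((4, 6), 30.0)      # dummy depth map, 30 m everywhere
points = depth_to_pseudo_lidar(depth, fx=700.0, fy=700.0, cx=3.0, cy=2.0)
bev = points[:, [0, 2]]            # bird's-eye view: keep x and z
print(bev.shape)                   # (24, 2)
```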

“There is a tendency in current practice to feed the data as-is to complex machine learning algorithms under the assumption that these algorithms can always extract the relevant information,” said co-author Bharath Hariharan, assistant professor of computer science. “Our results suggest that this is not necessarily true, and that we should give some thought to how the data is represented.”

“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car,” said Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper. “The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”

Understand.ai accelerates image annotation for self-driving cars


Using processed images, algorithms learn to recognize the real environment for autonomous driving. Source: understand.ai

Autonomous cars must perceive their environment accurately to move safely. The corresponding algorithms are trained using a large number of image and video recordings. Single image elements, such as a tree, a pedestrian, or a road sign must be labeled for the algorithm to recognize them. Understand.ai is working to improve and accelerate this labeling.

Understand.ai was founded in 2017 by computer scientist Philip Kessler, who studied at the Karlsruhe Institute of Technology (KIT), and Marc Mengler.

“An algorithm learns by examples, and the more examples exist, the better it learns,” stated Kessler. For this reason, the automotive industry needs a lot of video and image data to train machine learning for autonomous driving. So far, most of the objects in these images have been labeled manually by human staffers.

“Big companies, such as Tesla, employ thousands of workers in Nigeria or India for this purpose,” Kessler explained. “The process is troublesome and time-consuming.”

Accelerating training at understand.ai

“We at understand.ai use artificial intelligence to make labeling up to 10 times quicker and more precise,” he added. Although image processing is highly automated, final quality control is done by humans. Kessler noted that the “combination of technology and human care is particularly important for safety-critical activities, such as autonomous driving.”

The labels, also called annotations, in the image and video files have to match the real environment with pixel-level accuracy. The better the quality of the processed image data, the better the algorithm that is trained on it.
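To make that concrete, a labeled object in a training image is typically stored as a structured record along the lines of the hypothetical example below; schemas vary by tool, and this is not understand.ai's actual format.

```python
# Hypothetical annotation record for one object in a training image.
# Schemas differ per labeling tool; this is illustrative only.
import json

annotation = {
    "image": "frame_000123.png",
    "objects": [
        {
            "label": "pedestrian",
            "bbox": {"x": 412, "y": 188, "width": 34, "height": 96},
            "occluded": False,
            "reviewed_by_human": True,  # the final QA step described above
        }
    ],
}
print(json.dumps(annotation, indent=2))
```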

“As training images cannot be supplied for all situations, such as accidents, we now also offer simulations based on real data,” Kessler said.

Although understand.ai focuses on autonomous driving, it also plans to process image data for training algorithms to detect tumors or to evaluate aerial photos in the future. Leading car manufacturers and suppliers in Germany and the U.S. are among the startup’s clients.

The startup’s main office is in Karlsruhe, Germany, and some of its more than 50 employees work at offices in Berlin and San Francisco. Last year, understand.ai received $2.8 million (U.S.) in funding from a group of private investors.


Building interest in startups and partnerships

In 2012, Kessler started to study informatics at KIT, where he became interested in AI and autonomous driving while developing an autonomous model car in the KITCar student group. Kessler said his one-year tenure at Mercedes Research in Silicon Valley, where he focused on machine learning and data analysis, was “highly motivating” for establishing his own business.

“Nowhere else can you learn more in such a short period of time than in a startup,” said Kessler, who is 26 years old. “Recently, the interest of big companies in cooperating with startups has increased considerably.”

He said he thinks that Germany sleepwalked through the first wave of AI, in which it was used mainly in entertainment devices and consumer products.

“In the second wave, in which artificial intelligence is applied in industry and technology, Germany will be able to use its potential,” Kessler claimed.