Tuesday, 4 February 2020

The Future of Sensors for Self-Driving Cars: All Roads, All Conditions


Whatever your thoughts about how quickly autonomous vehicle technology will move forward, there is little doubt that it will need to rely on better and less expensive sensor technology than we have available today. Current test vehicles often have sensor suites costing over $100,000, and still can’t deal with all types of road and weather conditions.

To help provide some background context and assess the future potential of various sensor technologies, we assembled a panel of industry experts at Electronic Imaging 2020. They represented the major sensor modalities in automotive use today: lidar, radar, cameras, and thermal imaging. Everyone learned a lot, and there were some great takeaways that we’ll share with you in this writeup of the session.

Setting the Context: David Cardinal, ExtremeTech

To kick off the panel we set the stage with some background and goals for the session. For context, it is clear that there is a continuum of applications. It ranges from today’s “Level 2/2+” ADAS deployments all the way to the Holy Grail of Level 5 pursued by Waymo, with dozens of companies aimed at Level 4 shared-vehicle fleet deployments that are somewhere in between.

The Sensors for Autonomous Vehicles panel was a featured session at Electronic Imaging 2020.

As goals for the session, we set out 1) to understand the strengths and weaknesses of each technology, 2) to assess how those will change going forward, and 3) to explore how the technologies will compete with and also complement each other as parts of an overall solution.

Dr. Nikhil Naikal, Velodyne Lidar

It was fitting that the granddaddy of automotive sensor companies, Velodyne, kicked off the panel. Its involvement dates back to the original DARPA Challenge, and the now-infamous “KFC Bucket” style of roof-mounted scanning lidar. While Velodyne is still the acknowledged market leader, it now faces literally dozens of competitors.

Velodyne’s analysis of lidar strengths and weaknesses in multiple lighting conditions.

To face down the competition, Velodyne has broadened its array of lidar to include units down to the diminutive Velabit, which it expects to price at around $100 when it becomes available. The company is also anxious to live down its original rooftop design: Naikal showed us photos of a Tesla that Velodyne retrofitted with a suite of nearly invisible Velarray lidar. While many of its new competitors tout adding more intelligence to the lidar itself, Velodyne is moving carefully, adding only those elements of processing that it thinks are best distributed.

Evaluating Cameras for Automotive, Nicolas Touchard, DXOMARK

Measuring exposure response time and overshoot.

Between mandatory backup cameras and over 50 million front-facing cameras in vehicle ADAS systems, visible-light imaging is — along perhaps with parking sensors — the predominant form of sensor technology currently found in vehicles. For now, those systems are only provided to help human drivers, so if a lane-keeping camera loses the lines when the vehicle heads into the sun, the driver is in charge. But as ADAS and eventually self-driving systems become more advanced, it will be essential for vehicle cameras to perform well in all situations.

Camera benchmarking firm DXOMARK has done a lot of work in accurately characterizing the camera image quality challenges that are unique to automotive, which its VP of marketing, Nicolas Touchard, shared with us. Quickly adapting exposure to sudden changes in light levels, like entering or leaving a tunnel, is an important capability that requires careful measurement — of both the adaptation time and any resulting overshoot before the exposure settles to its new value. The ability to accurately sense LEDs that flicker at various frequencies is another important feature, and DXOMARK has built custom hardware that allows automakers and suppliers to measure it for proposed camera designs.
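To make the measurement concrete, here is a minimal sketch (our illustration, not DXOMARK’s actual protocol) of how adaptation time and overshoot could be extracted from a per-frame luminance trace recorded while the test lighting steps from tunnel-dark to daylight. The 5 percent settling band and the function name are assumptions.

```python
import numpy as np

def exposure_response_metrics(luma, fps, settle_tol=0.05):
    """Estimate exposure adaptation time and overshoot from a per-frame
    mean-luminance trace that starts at a step change in scene lighting.

    luma: 1-D sequence of per-frame mean luminance values after the step.
    fps: capture frame rate, in frames per second.
    settle_tol: fraction of the final value defining the settled band
                (an assumed threshold, not a DXOMARK specification).
    """
    luma = np.asarray(luma, dtype=float)
    final = luma[-max(1, len(luma) // 10):].mean()  # steady-state estimate
    outside = np.abs(luma - final) > settle_tol * final
    # Adaptation time: the moment the trace last leaves the tolerance band.
    settle_frame = int(np.nonzero(outside)[0].max()) + 1 if outside.any() else 0
    adaptation_time_s = settle_frame / fps
    # Overshoot: how far the response swings past its settled value.
    overshoot_pct = 100.0 * max(0.0, luma.max() - final) / final
    return adaptation_time_s, overshoot_pct
```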

Thermal Imaging in Automotive, Mike Walters, FLIR

Along with the difficulty of sensing distance, the other big knock on traditional cameras is that they don’t work well in bad light — deep shade, backlight, or nighttime, for example. Thermal cameras avoid those issues by directly sensing the longer-wave radiation that emanates from anything that gives off heat, which makes them particularly effective at detecting cars, people, and animals. Mike Walters, from thermal industry leader FLIR, took us on a tour of some of the current use cases for thermal cameras in vehicles, with a series of compelling videos about their use in low light, direct sunlight, and poor weather.

Thermal cameras are particularly effective at night, although they need help telling the color of stoplights.
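A quick back-of-the-envelope calculation shows why warm objects stand out so well: by Wien’s displacement law, a body near 300 K radiates most strongly right in the 8-14 µm long-wave infrared band that automotive thermal cameras target. A minimal sketch (the temperatures are illustrative):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_emission_wavelength_um(temp_c):
    """Wavelength (micrometres) at which a blackbody at temp_c degrees
    Celsius radiates most strongly, per Wien's displacement law."""
    return WIEN_B / (temp_c + 273.15) * 1e6

# A pedestrian at ~37 C peaks near 9.3 um; sun-heated pavement at ~50 C
# peaks near 9.0 um, both squarely inside the 8-14 um LWIR band.
print(f"{peak_emission_wavelength_um(37):.1f} um")  # 9.3
print(f"{peak_emission_wavelength_um(50):.1f} um")  # 9.0
```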

While deploying thermal cameras comes with its own unique challenges — traditional automotive glass is opaque to infrared, so they can’t be mounted behind windshields, for example — they offer a lot of promise as a complement to other modes of sensing.

Automotive Radar, Greg Stanley, NXP Semiconductor

While lidar gets most of the press because of its impressive functionality, its lower-cost sibling radar is far more ubiquitous in automotive applications. Essentially all adaptive cruise control systems — even Tesla’s — use at least one radar. Most typical blind-spot monitoring systems also rely on radar. Some test vehicles, like the Cruise model shown below, have more than 20 of them, including three that swivel. Waymo’s minivans have six. Greg Stanley, from chip giant NXP, took us through some basics of how radar works, what it is capable of, and where it is heading.

Cruise’s test vehicles have as many as 20 radar, five lidar, and 16 cameras.
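As a rough illustration of the frequency-modulated continuous-wave (FMCW) approach most automotive radar uses (a generic sketch, not NXP’s implementation): the transmitter sweeps its frequency linearly, so an echo’s round-trip delay appears as a constant "beat" frequency between the transmitted and received signals, which maps directly to range. The chirp parameters below are illustrative.

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_target_range_m(beat_hz, sweep_bw_hz, chirp_s):
    """Target range from the beat frequency of a linear FMCW chirp.

    With the carrier swept over sweep_bw_hz in chirp_s seconds, an echo
    delayed by 2 * range / C is offset from the transmitted frequency by
    beat = (sweep_bw_hz / chirp_s) * (2 * range / C); solving for range:
    """
    return C * beat_hz * chirp_s / (2.0 * sweep_bw_hz)

# Illustrative numbers: a 77 GHz radar sweeping 1 GHz in 50 microseconds
# that measures a 2 MHz beat is seeing a target roughly 15 m away.
print(f"{fmcw_target_range_m(2e6, 1e9, 50e-6):.1f} m")  # 15.0
```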

In particular, Stanley said that makers of radar units are looking to improve functionality, including adding object classification and vehicle localization capabilities. Like the other panelists, he stressed that vehicles need a complementary suite of sensors. For example, radar isn’t going to be helpful in reading speed limit signs or stoplights.

Sanjai Kohli, Visible Sensors

One casualty of the cold water thrown on visions of driverless vehicles being just around the corner has been startups with innovative technologies hoping to sell into that market. Sanjai Kohli was the founder of one of those — Visible Sensors. After raising $10 million in venture capital to develop a highly effective radar sensor technology, the company was unable to find car companies or major suppliers willing to commit to purchasing it in production volumes any time soon. So, in a move that is quite unusual for Silicon Valley, they returned the money to their investors and went on to other endeavors.

While we can speculate about which of the hundreds of startups in the autonomous vehicle industry will be wildly successful, there is no doubt that many, and probably most, will ultimately meet a less-than-happy ending. So it was helpful for the audience members — many of whom are looking at getting into the field — to hear some of the practical realities of building a business out of a great invention.

Radical New Sensor Architecture for Driverless Cars, Alberto Stochino, Perceptive

Looking at current automotive sensor architectures, sensing industry veteran Stochino came to the conclusion that truly advanced driverless technology — the kind required for L4 and L5 — would require a radically new approach. He founded Perceptive based on a vision of an all-digital platform with relatively low-cost but high-performance sensors — antennas and cameras — around the periphery of a car, connected by optical fiber to a central processing core.

Perceptive is developing a flexible vehicle sensor architecture with low-cost but high-performance sensors coupled to a central processing core.
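Simple arithmetic hints at why fiber rather than conventional automotive wiring: even a modest suite of uncompressed camera feeds adds up to tens of gigabits per second. The camera count and specifications below are our assumptions for illustration, not Perceptive’s published figures.

```python
def raw_stream_gbps(width, height, fps, bits_per_px):
    """Uncompressed data rate of one camera stream, in gigabits per second."""
    return width * height * fps * bits_per_px / 1e9

# Hypothetical suite: eight 4K cameras at 30 fps with 12-bit raw output.
per_camera = raw_stream_gbps(3840, 2160, 30, 12)  # ~3.0 Gbit/s each
total = 8 * per_camera                            # ~23.9 Gbit/s aggregate
print(f"{per_camera:.1f} Gbit/s per camera, {total:.1f} Gbit/s total")
```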

The biggest takeaway from the panel is that none of the panelists believes a single sensor modality will be sufficient for a true driverless vehicle. When asked about the argument that “people can drive with two eyes, why can’t cars?” their responses ranged from the need to be better than human drivers to a desire for true redundancy for safety. All of the panelists also agreed that it will be years before the advanced technology needed for L4 and above is close to affordable for retail car buyers. So they are all determined to buckle up for the long, slow adoption curve they expect as costs gradually come down with increased volume and innovation.

Top image credit: Getty Images
