FMCW LiDARs vs Imaging RADARs — The Battle of 4D

Feb 22, 2022

In this article, we'll talk about the future of Perception SENSORS, in particular FMCW LiDARs and Imaging RADARs, the future of their respective 3D versions. As you may know, self-driving cars and autonomous robots mainly use 3 sensors for Perception:

  • CAMERAS — Used for classification, and context understanding
  • LiDARS (Light Detection And Ranging) — Used for 3D and Distance Estimation
  • RADARS (Radio Detection And Ranging) — Used for Velocity Estimation

This is what I say in my Intro course on Self-Driving Cars, and this is basically what the entire industry says: you need to use all 3 and mix them with Sensor Fusion. That's the vision that took self-driving cars from 2005 to 2020.

But what if this is not the one that will take self-driving cars from 2020 to 2030? There are still flaws in the current systems, and still cases they can't handle. For example, RADARs have lots of noise and pretty bad resolution, while LiDARs are blind in fog and can't measure speed.

To solve these cases, sensors will need to "evolve", to receive a mutation. In this article, I'd like to talk about two of them: how LiDARs are now evolving into 4D LiDARs, and how RADARs are now evolving into 4D RADARs.

Let's study both, and decide which one has more potential...

From 2D LiDARs to 4D LiDARs

Introduction to LiDAR

LiDARs are sensors that send laser beams in specific directions and return a point cloud, where each point has an XYZ position.
A couple of years ago, I was working on an autonomous shuttle and we were trying to solve the cost problem of LiDARs. At the time, a 3D LiDAR could easily cost 50,000€, and building a million-dollar car was not really an option for us (it still isn't). We decided to go with cheap (4,000€) 2D LiDARs, or to be more specific, 4-Layer LiDARs.
Instead of having 64 or 128 layers like usual LiDARs have nowadays, it only had 4. Here is what it looks like:


Needless to say, seeing the cost of 3D LiDARs drop was fantastic news for us, and I think we could now find some at the price we paid back then.

3D LiDARs give us great context, and they have super-high resolution, with millions of points generated every second. The depth is very accurate (it could be considered ground truth), and using the reflectivity of the points, we can even find lane lines or classify signs!
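As an illustration of that last point, here's a minimal Python sketch of the idea (the function name and threshold values are made up for the example, and I'm assuming the intensity is normalized to [0, 1]): lane paint is far more retro-reflective than asphalt, so keeping only the ground points with a high intensity already gives decent lane-marking candidates.

```python
import numpy as np

def lane_marking_candidates(points, intensity, ground_z=0.0,
                            z_tol=0.2, intensity_min=0.8):
    """Keep points that are roughly on the ground AND highly reflective.

    points:    (N, 3) array of XYZ coordinates (meters).
    intensity: (N,) array of reflectivity values, assumed normalized to [0, 1].
    Lane paint is retro-reflective, so it returns far more energy than asphalt.
    """
    on_ground = np.abs(points[:, 2] - ground_z) < z_tol   # close to road height
    reflective = intensity > intensity_min                # bright returns only
    return points[on_ground & reflective]
```

A real pipeline would fit the ground plane first and then cluster or fit curves on the surviving points, but the core trick really is that intensity channel.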

However:

  • They don't return the velocity; it has to be estimated from two consecutive frames (see the sketch right after this list).
  • And they perform poorly in rain or fog.
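Here's what that first limitation looks like in practice: a minimal, hypothetical Python sketch that estimates an object's velocity from two consecutive LiDAR frames by tracking its centroid (it assumes the object's points have already been segmented and associated across frames, which is the hard part).

```python
import numpy as np

def velocity_from_frames(points_t0, points_t1, dt):
    """Estimate an object's velocity from two consecutive LiDAR frames.

    points_t0, points_t1: (N, 3) and (M, 3) arrays of XYZ points belonging
    to the same tracked object (segmentation/association done upstream).
    dt: time elapsed between the two scans, in seconds.
    """
    centroid_t0 = points_t0.mean(axis=0)      # object center at t0
    centroid_t1 = points_t1.mean(axis=0)      # object center at t1
    return (centroid_t1 - centroid_t0) / dt   # velocity vector in m/s

# Toy example: an object moves ~1 m forward between two scans 0.1 s apart
obj_t0 = np.random.randn(100, 3) * 0.2 + np.array([10.0, 0.0, 0.5])
obj_t1 = np.random.randn(100, 3) * 0.2 + np.array([11.0, 0.0, 0.5])
print(velocity_from_frames(obj_t0, obj_t1, dt=0.1))   # roughly [10, 0, 0] m/s
```

Every step of that pipeline (segmentation, association, the latency between scans) adds error, which is exactly why measuring velocity directly is so attractive.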

Enter 4D LiDARs:

Introduction to FMCW LiDARs (Frequency Modulated Continuous Wave)

A 4D LiDAR is a LiDAR that can also directly measure the speed of an object. These are also called Doppler LiDARs or FMCW LiDARs, because they use the Doppler Effect, measured through an FMCW chip: the same technology RADARs use to directly measure velocity.
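To give a feel for what that FMCW measurement looks like numerically, here's a simplified textbook sketch in Python (my own illustration, not Blackmore's or Aurora's actual signal processing): with a triangular chirp, the beat frequencies measured on the up-sweep and the down-sweep let you separate the part caused by distance from the part caused by motion.

```python
C = 3e8  # speed of light, m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, chirp_slope, carrier_freq):
    """Recover range and radial velocity from the two beat frequencies of a
    triangular FMCW chirp (up-sweep and down-sweep).

    Sign convention used here: an approaching target adds a Doppler shift, so
    f_beat_up = f_range - f_doppler and f_beat_down = f_range + f_doppler.
    """
    f_range   = (f_beat_up + f_beat_down) / 2        # beat caused by distance
    f_doppler = (f_beat_down - f_beat_up) / 2        # beat caused by motion
    distance  = C * f_range / (2 * chirp_slope)      # meters
    velocity  = C * f_doppler / (2 * carrier_freq)   # m/s, positive = approaching
    return distance, velocity

# Illustrative numbers only: a 1550 nm laser (carrier ~193 THz) and a
# 4 GHz sweep over 10 µs (chirp slope = 4e14 Hz/s)
dist, vel = fmcw_range_velocity(f_beat_up=1.0e8 - 1.3e7,
                                f_beat_down=1.0e8 + 1.3e7,
                                chirp_slope=4e14,
                                carrier_freq=1.93e14)
print(dist, vel)   # ~37.5 m away, ~10 m/s approaching
```

The exact parameters (wavelength, sweep bandwidth, chirp duration) vary per vendor; the point is that a single measurement returns both distance and velocity.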


Here's what the startup Blackmore is doing on LiDARs... notice how moving objects are colored while others aren't:

A 4D LiDAR that can estimate velocities and predict trajectories (blue: approaching | red: receding)

Not only are we adding the 4th dimension, but we also get the exact trajectory of every single point! Which means we have:

Built-In Optical Flow!

This is thanks to something called the Doppler Effect. If you're familiar with RADARs, you know that's exactly what they use.

What is the Doppler Effect?

If you take a look at my RADAR article, you'll see this picture used to measure the range of an object:

This is how a RADAR works using FMCW (Frequency Modulated Continuous Wave): it sends continuous electromagnetic waves at a specific frequency, but when these waves hit a moving object, they're reflected back at a slightly different frequency.

To understand better, here's an image that shows it:

It's really all about the frequency of the reflections.

  • If it's higher, the object is approaching us.
  • If it's lower, the object is moving away from us.

The Doppler Effect is exactly about measuring this frequency shift.
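In equation form, that comparison directly gives the radial velocity (a simplified two-way Doppler relation, valid for speeds far below the wave speed):

```python
def doppler_radial_velocity(f_received, f_emitted, wave_speed=3e8):
    """Radial velocity of the reflector from the measured frequency shift.

    Positive result -> the frequency went up   -> the object is approaching.
    Negative result -> the frequency went down -> the object is moving away.
    (The factor of 2 is because the wave travels out and back.)
    """
    return wave_speed * (f_received - f_emitted) / (2 * f_emitted)
```

The same relation holds for light, which is what makes Doppler LiDARs possible.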
Back to LiDARs:

4D LiDARs, aka Doppler LiDARs, use exactly this frequency modulation technique to get the velocity information.

The difference is that these LiDARs measure the frequency of the waves, while 3D LiDARs measure their amplitude. As a reminder, the frequency is tied to the wavelength, while the amplitude is the "height" of the wave.

Here is a terrible (but clear) drawing to explain it:

In a traditional LiDAR, we don't look at the frequency: we send laser beams, measure the amplitude of the returning waves, and based on that amplitude, we decide whether to register a point or not:

To sum up: 4D LiDARs do Frequency Modulation, and 3D LiDARs do Amplitude Modulation. This might seem like a small shift, but look at what it can do according to self-driving car startup Aurora:

Indeed, we increase the range, but that's not all!

Here are a few characteristics of Blackmore's Doppler LiDAR (now Aurora):

  • Range: 450 m (better than LiDARs)
  • Field of View: 120° (not solid-state)
  • Velocity Accuracy: 0.1 m/s on objects moving up to 150 m/s
  • Resolution: 2 million points/second (similar to LiDARs)

There are a few reasons why Doppler LiDARs are better than LiDARs, but the biggest one is the ability to measure speed directly instead of calculating the difference between two point clouds.

Next, 4D RADARs!

4D RADARs (Imaging RADARs)

You probably saw the news about how Tesla removed the RADAR from their cars and now rely entirely on cameras. One of the reasons they gave was that the RADAR's performance was so low that it was negatively affecting the Sensor Fusion algorithm.
Why is that? And how could a "mutation" save RADARs?
Let's begin with the basics:

RADAR stands for Radio Detection And Ranging. It works by emitting electromagnetic (EM) waves that reflect when they meet an obstacle. And unlike cameras or LiDARs, it can work under any weather condition, and even see underneath obstacles.
For every obstacle, a RADAR can measure:
  • The XYZ position (sort of)
  • The velocity, thanks to the Doppler Effect

Which is already good, right? We have the 3D position, plus the speed, which is already somehow 4D.
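To make that "sort of" concrete, here's a hypothetical sketch of what one classic RADAR detection gives you and how it's usually turned into a position: range and azimuth (but no reliable height), plus the radial velocity from the Doppler Effect.

```python
import math

def radar_detection_to_cartesian(range_m, azimuth_rad, radial_velocity):
    """Convert one classic automotive RADAR detection into a usable point.

    A conventional RADAR measures range + azimuth (no reliable elevation),
    so the "position" is really a 2D point on the ground plane, and the
    Doppler effect adds the radial velocity on top.
    """
    x = range_m * math.cos(azimuth_rad)   # forward distance
    y = range_m * math.sin(azimuth_rad)   # lateral distance
    return {"x": x, "y": y, "z": None, "radial_velocity": radial_velocity}

# Example: a car 50 m ahead, 10° to the left, closing in at 5 m/s
print(radar_detection_to_cartesian(50.0, math.radians(10), radial_velocity=-5.0))
```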
So, let me show you what a RADAR outputs:

Pretty bad, right?

First, it's written 3D, but it's really a 2D process. We don't have an accurate height of each point. The only reason it's called 3D is because the third dimension is the velocity of obstacles, directly estimated through the Doppler effect.
Of course, we can work on these "point clouds", apply Deep Learning (algorithms very similar to what I teach in my Deep Point Clouds course), and after a RADAR/Camera Fusion, we can even get a result like this:


Notice how the yellow dot changes to green as soon as the car moves, and how every static object stays orange, while moving objects get a different color.

This is because RADARs can directly measure velocity using the exact same Doppler Effect that FMCW LiDARs now use.
As we said, there are a lot of flaws, which brings us to this comparison with 4D RADARs:

Of course, this isn't a super accurate graph; I came up with it from what I could find, and some information might be a bit off.
But we can see that 4D RADARs are really good. In fact, they have even been accused of being:
RADARS on Steroids!
But this is not the case. In fact, they work totally differently, using what's called MIMO ANTENNAS.

MIMO Antennas


4D RADARs work using MIMO (Multiple Input Multiple Output) antennas. Dozens of mini-antennas are sending waves all over the place, both in horizontal and vertical directions.

In a 3D RADAR, it's only a horizontal process, so we don't have the height, and we have a pretty bad resolution.

By analyzing the signals from all these antennas, we get a much better resolution, range, and precision. We could in fact detect people inside a vehicle, and even tell children apart from adults.
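Here's a small, illustrative Python sketch of why this matters (the array sizes and the wavelength/(N·spacing) rule of thumb are textbook approximations, not any specific vendor's spec): in MIMO, the transmitters and receivers combine into a much larger "virtual" array, and angular resolution improves with the number of virtual elements.

```python
import math

def mimo_angular_resolution_deg(n_tx, n_rx, spacing_in_wavelengths=0.5):
    """Rough angular resolution of a MIMO RADAR at boresight (uniform array).

    n_tx transmitters and n_rx receivers behave like a single virtual array
    of n_tx * n_rx elements, which sharpens the angular resolution without
    physically building a huge antenna.
    Approximation used: theta_res ~ wavelength / (N * element_spacing).
    """
    n_virtual = n_tx * n_rx
    theta_res_rad = 1.0 / (n_virtual * spacing_in_wavelengths)
    return math.degrees(theta_res_rad), n_virtual

# A classic 3 Tx x 4 Rx automotive RADAR vs a larger imaging-RADAR array
print(mimo_angular_resolution_deg(3, 4))    # ~9.5 degrees, 12 virtual elements
print(mimo_angular_resolution_deg(12, 16))  # ~0.6 degrees, 192 virtual elements
```

Running the same idea in the vertical direction is what gives 4D RADARs the height information that 3D RADARs are missing.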

So this is it, the power of self-driving cars in the palm of your hand.

4D LiDARs vs 4D RADARs

It wouldn't be fun if we didn't compare both. First, let's watch together a short video of a Doppler LiDAR from Blackmore (acquired by Aurora):

In the video, we can see how well it measures whether an object is coming towards us (blue) or moving away (red):

On the RADAR side, here is a great video showing the new evolutions:

Now, here's a demo from Waymo's blog post. Do you notice how well it can see obstacles that are barely visible on cameras?

The MIMO antennas are what produce these heatmap-like results.

Conclusion: My prediction

Several months ago, Intel's self-driving car startup Mobileye announced that in 2025, they'd be able to drive using just one front FMCW LiDAR and a few imaging RADARs.
Surprisingly, they didn't say "we'll use just the LiDARs now" or "just the RADARs"; they still intend to use both.
The entire point of these mutations is to make the sensors so good they could stand alone (alongside the camera). And this will probably be the case.
Because of the nature of the LiDAR, and because today we can use Deep Learning with LiDARs very well, I'd say 4D LiDARs will be the sensor that remains.
Note my prediction, and come back to haunt me when I'm wrong.
If you liked this article, there's TONS of information I share daily in my Private Email List. If you'd like to join, here's the link.
