4D LiDARs vs 4D RADARs: Why the LiDAR vs RADAR comparison is more relevant today than ever

A few years ago, I was freelancing for a company that asked me to help them decide on the sensor suite for their last-mile delivery shuttle. Their question? Camera vs LiDAR vs RADAR: which set of sensors to pick?

At the time (2020), what made the most sense was to combine all 3. "These sensors are complementary," I would reply. "The LiDAR is the most accurate sensor for measuring distance, the camera is mandatory for scene understanding, and the RADAR can see through objects and allows for direct velocity measurement."

In most articles, courses, or content related to autonomous tech sensors, the same answer has been given: use all 3 sensors! Except for Elon Musk, who really doesn't want anything other than cameras in a Tesla, the answer has been universally accepted.

But the world has changed. In the past few years, these sensors have evolved. If the LiDAR used to be weak at velocity estimation, and if the RADAR used to be very noisy, that's no longer the case. This is mainly due to 2 emerging technologies: the "FMCW LiDAR" or "4D LiDAR", and the "Imaging RADAR" or "4D RADAR".

In this article, I want to describe the evolution of LiDAR and RADAR systems, and share with you some of the implications for the autonomous driving field.

Let's begin with the LiDAR:

LiDAR and RADAR sensors before 2020

I have to say that nothing crazy happened in 2020 (at least on this topic), but it's a new decade so let's use it as a reference.

The LiDAR pre-2020

First, the LiDAR (Light Detection And Ranging).

In the 2010s, when we talked about LiDARs, what people meant was ToF LiDAR (Time-of-Flight LiDAR) systems. These sensors send a light pulse into the environment and measure the time it takes for the wave to come back; from that round-trip time, the "time of flight" principle gives us the distance of any object. Here is an example:

How a TOF LiDAR works
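To make the time-of-flight idea concrete, here's a minimal Python sketch; the pulse timing is a made-up number, purely for illustration:

```python
SPEED_OF_LIGHT = 299_792_458  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from a time-of-flight measurement.

    The pulse travels to the object and back, so we divide by 2.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2

# A pulse that returns after 200 nanoseconds corresponds to ~30 m.
print(tof_distance(200e-9))  # ~29.98
```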

Depending on the number of vertical layers your LiDAR has, you can have a 2D or a 3D LiDAR that creates a point cloud of the environment. Here is a comparison:

2D vs 3D LiDAR Point Clouds

The advantage of these sensors is the accuracy of the measurements: we get laser-like precision. Many Computer Vision systems are indeed trained using ToF LiDAR labels. But the drawback is that if you want to measure a velocity, you need to compute the difference between 2 consecutive timestamps.
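Here's a minimal sketch of that frame-differencing idea, assuming we already track the same object's centroid across two frames (the function name and all numbers are hypothetical):

```python
import numpy as np

def velocity_between_frames(centroid_t0, centroid_t1, dt):
    """Finite-difference velocity of a tracked object between two LiDAR frames.

    centroid_t0, centroid_t1: XYZ centroid of the same object's points
    at two consecutive timestamps; dt: time between the frames (seconds).
    """
    return (np.asarray(centroid_t1) - np.asarray(centroid_t0)) / dt

# A 10 Hz LiDAR (dt = 0.1 s): the object moved 1.5 m forward -> 15 m/s.
print(velocity_between_frames([10.0, 0.0, 0.5], [11.5, 0.0, 0.5], dt=0.1))
# [15.  0.  0.]
```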

The RADAR pre-2020

Now let's see the RADAR (Radio Detection And Ranging):

You probably saw the news about how Tesla removed the RADAR from their cars and now rely entirely on cameras and algorithms such as Occupancy Networks or HydraNets.

One of the reasons they gave was that the RADAR's performance was so low that it was negatively affecting the Sensor Fusion algorithm.
Why is that?

Let's begin with the basics:

RADAR stands for Radio Detection And Ranging. It works by emitting electromagnetic (EM) waves that reflect when they meet an obstacle. And unlike cameras or LiDARs, it can work under any weather condition, and even see underneath obstacles (for example, by bouncing waves under the car in front).
Thanks to the ranging measurement and the Doppler Effect, we can measure for every obstacle:
  • The XYZ position (sort of)
  • The velocity

Which is already good, right? We have the 3D position plus the speed, so we have a nice 4D output.
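For the velocity part, here's a small sketch of the Doppler formula a RADAR relies on; the 77 GHz carrier is typical for automotive RADARs, but the shift value is made up for illustration:

```python
SPEED_OF_LIGHT = 3e8  # m/s (approximation)

def radial_velocity(doppler_shift_hz: float, carrier_freq_hz: float) -> float:
    """Radial velocity of an obstacle from the Doppler shift of a RADAR return.

    Positive shift = approaching, negative = receding.
    The factor of 2 accounts for the two-way (out-and-back) travel of the wave.
    """
    return doppler_shift_hz * SPEED_OF_LIGHT / (2 * carrier_freq_hz)

# A 77 GHz automotive RADAR measuring a +5.13 kHz shift -> ~10 m/s, approaching.
print(radial_velocity(5.13e3, 77e9))  # ~9.99
```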

Output from a RADAR system

But let me show you the real output from a RADAR sensor:

The real output from a RADAR sensor (source)

Pretty bad, right?

First, it's written 3D, but it's really a 2D output. We don't have an accurate height for each point. The only reason it's called 3D is that the third dimension is the velocity of the obstacles, directly estimated through the Doppler effect.

Of course, we can work on these "point clouds", apply Deep Learning (algorithms very similar to what I teach in my Deep Point Clouds course), and after a RADAR/Camera Fusion, we can even get a result like this:

A RADAR fused with a camera (source)

Notice how the yellow dot turns green as soon as the car moves, and how every static object is orange while moving objects get a color. This is because the RADAR is really good at measuring velocities.

Summary: LiDAR vs RADAR Pre-2020

If we summarize, we get this comparison:

Camera vs LiDAR vs RADAR comparison (in this order)
Notice how the camera has lots of advantages related to scene understanding, the RADAR has lots of strengths in its maturity, weather robustness, and velocity measurement, and the LiDAR has one key strength: distance estimation.

Now the problem is: how do we avoid using all 3 sensors?
In self-driving cars, for example, the more sensors we use, the more expensive the car becomes. What Elon Musk and Tesla did when removing the RADAR, and staying without a LiDAR, is the most economical decision for them to sell cars directly to consumers.

Today, we have access to new LiDAR and RADAR systems: the 4D LiDAR and the 4D RADAR. Both return great depth estimation and a clean point cloud, and directly measure velocity.

Let's see how:

The new RADAR and LiDAR sensors of self-driving cars

I mainly want to talk about 2 technologies. They aren't technically "new", but they're new in their use and in their adoption by the market: the FMCW LiDAR and the Imaging RADAR.

You'll see that while a LiDAR vs RADAR comparison didn't make much sense before (because they were complementary), it now makes a lot of sense because they can become competitors.

First, the 4D LiDAR:

Introducing the FMCW LiDAR (Frequency Modulated Continuous Wave LiDAR) or 4D LiDAR

An FMCW LiDAR (or 4D LiDAR, or Doppler LiDAR) is a LiDAR that returns the depth information but can also directly measure the speed of an object. What happens behind the scenes is that they borrow the RADAR's Doppler technology and adapt it to a light sensor.

Here's what the startup Blackmore is doing on LiDARs... notice how moving objects are colored while others aren't:

A 4D LiDAR that can estimate velocities and predict trajectories — blue: approaching | red: receding (source: Blackmore / Aurora)

To generate this, the LiDAR uses the Doppler Effect. If you're interested in learning more about how FMCW LiDAR sensors use the Doppler effect, I have a post detailing the process here.

The main idea can be seen in this image, where we play with the frequency of the returned wave to measure the velocity.

If a wave is reflected at a higher frequency, the object is approaching. If lower, it's moving away from us. (see it on the FMCW LiDAR post)

The Doppler Effect is exactly about measuring this frequency shift. And it has now been adopted in FMCW LiDAR systems, but with light waves instead of radio waves.
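The formula is the same idea as for the RADAR, just expressed with the laser's wavelength. Here's a small sketch; 1550 nm is a common choice for FMCW LiDARs, but the shift value is made up for illustration:

```python
def lidar_radial_velocity(doppler_shift_hz: float, wavelength_m: float) -> float:
    """Same Doppler idea as the RADAR, but expressed with the laser's wavelength.

    v = doppler_shift * wavelength / 2 (the factor of 2 is the two-way travel).
    """
    return doppler_shift_hz * wavelength_m / 2

# A 1550 nm laser seeing a ~12.9 MHz frequency shift -> ~10 m/s.
print(lidar_radial_velocity(12.9e6, 1550e-9))  # ~9.997
```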

The AMCW LiDAR: Another type of LiDAR

The FMCW LiDAR isn't the only "new" type. LiDARs can also do AMCW: Amplitude Modulated Continuous Wave modulation.

The difference is that FMCW LiDARs modulate and measure the frequency of the wave, while AMCW LiDARs modulate and measure its amplitude. As a reminder, the frequency is related to the wavelength, while the amplitude is the "height" of the wave.

Here is a terrible (but clear) drawing to explain it:

In a traditional LiDAR, we don't look at the frequency: we send laser beams, measure the amplitude of the returned waves, and based on that amplitude, we decide whether to register a point or not:

How an AMCW LiDAR makes a Point Cloud
To sum up: a Time-of-Flight LiDAR does time-of-flight measurement, an FMCW LiDAR does Frequency Modulation, and an AMCW LiDAR does Amplitude Modulation.
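Here's a toy sketch of the amplitude-thresholding step described above; the amplitudes and the threshold are made-up values, purely for illustration:

```python
import numpy as np

# Made-up returned-wave amplitudes for a handful of laser beams.
return_amplitudes = np.array([0.02, 0.45, 0.71, 0.05, 0.33])

# Below this made-up threshold, we treat the return as noise.
AMPLITUDE_THRESHOLD = 0.1

# Keep only the beams whose returned amplitude is strong enough
# to be registered as points in the cloud.
is_point = return_amplitudes > AMPLITUDE_THRESHOLD
print(is_point)  # [False  True  True False  True]
```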

This might seem like a small shift, but look at what it can do according to self-driving car startup Aurora:

The Aurora Graph showing AMCW vs FMCW LiDAR technology (source)

Next:

Introducing the 4D RADAR (or Imaging RADAR)

When moving from 3D to 4D RADARs, we expect much better resolution and less noise. And this is what's happening with Imaging RADAR systems. These new sensors are becoming more and more popular, and have even been called...

RADAR on steroids!

To better understand why, we need to look at the main concept they use: MIMO antennas.

MIMO Antennas

4D RADARs work using MIMO (Multiple Input Multiple Output) antennas. Dozens of mini-antennas send waves everywhere, in both horizontal and vertical directions.

In a 3D RADAR, the scanning is only horizontal, so we don't get the height, and the resolution is pretty bad.

By analyzing all these antennas together, we get much better resolution, range, and precision. We could in fact detect occupants inside a vehicle, and even distinguish children from adults.

So this is it: the power of self-driving cars in the palm of your hand.

Main idea behind MIMO Antennas (source)
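To give an idea of why MIMO helps, here's a small sketch of the "virtual array" math: n_tx transmitters combined with n_rx receivers behave like n_tx * n_rx virtual antennas, and more virtual antennas means finer angular resolution (the 12x16 layout here is hypothetical):

```python
import math

# Hypothetical imaging-RADAR layout: 12 transmit x 16 receive antennas.
N_TX, N_RX = 12, 16

# A MIMO array behaves like a virtual array of n_tx * n_rx elements.
n_virtual = N_TX * N_RX

# Approximate boresight angular resolution of a uniform,
# half-wavelength-spaced array: ~2 / N radians.
resolution_deg = math.degrees(2 / n_virtual)

print(n_virtual, round(resolution_deg, 2))  # 192 antennas, ~0.6 degrees
```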

Thanks to this addition, the RADAR can now get pretty cool results. What does it look like? Here's a demo from Waymo's blog post. Do you notice how well it can see obstacles that are barely visible on cameras?

View of Waymo's Imaging RADAR (source)

Conclusion: 4D LiDARs vs 4D RADARs — Which sensor is the best?

Back in the day, the comparison didn't make sense because the sensors were highly complementary. But today, these sensors can be in competition. Their evolution brings a lot of benefits, so let's take our comparison from the beginning and see where it improved.

Camera vs FMCW LiDAR vs Imaging RADAR — blue: Improved, red: Worse

What we can note is how both sensors got better and could now work standalone, or at least much better than their earlier versions.

  • On the LiDAR side: FMCW makes it better in weather conditions and adds velocity estimation.
  • On the RADAR side: the overall better resolution removes lots of noise and makes it easier to measure distances, classify objects, and so on.

Sensors in Action

When I was at CES 2023, I saw these sensors in action through 2 startups: Aeva and Bitsensing. So let's visualize both solutions.

Aeva's 4D LiDAR:

Aeva's FMCW LiDAR that can estimate velocities and predict trajectories (blue: approaching | red: receding)

Bitsensing's Imaging RADAR:

My predictions for self-driving cars

Several months ago, Intel's self-driving car startup Mobileye announced that by 2025, they'd be able to drive using just one front FMCW LiDAR and a few Imaging RADARs. Surprisingly, they didn't say "we'll use just the LiDARs now" or just the RADARs; they still intend to use both.

However, I still think these sensors are more competitors today than they used to be; if both can achieve the same purpose, then complementarity isn't needed anymore.

My prediction is that Imaging RADARs could help companies get rid of the expensive LiDAR and rely on a camera-plus-RADAR solution only, while FMCW LiDARs could help companies enhance their LiDAR stack even more and get closer to autonomy.

Next Steps

  • Learn about the FMCW LiDAR here
📨 If you want to learn more about LiDARs and cutting-edge technology, I send daily emails about these technologies, read by over 10,000 engineers. You can join the daily emails here.