LiDAR vs Cameras vs Radar technology

An in-depth comparison of LiDAR, Camera, and Radar technology

This article explores the capabilities and limitations of each type of sensor, to provide a clear understanding of why LiDAR has emerged as a strong contender in the computer vision tech race.

This article delves into the technology aspects and differences of these sensors in general, not for a specific market segment like Automotive (as explained here, most applications of LiDAR are not in this field).
If you're seeking a concise overview specifically for People Counting applications, download our comparative guide here.

Unique Perspectives: How Diverse Sensors See the World

Before delving into the relative strengths and weaknesses of these technologies, let's first provide a brief overview of how cameras, radars, and LiDAR systems operate and perceive the world around them.

Active vs. Passive Sensors

Generally speaking, sensors are devices that measure physical properties and transform them into signals suitable for processing, displaying, or storing.

Radar and LiDAR are active sensors, unlike Cameras, which are passive.

Active sensors emit energy (e.g. Radio Waves or Laser light) and measure the reflected or scattered signal, while passive sensors detect the natural radiation or emission from the target or the environment (e.g. Sunlight or artificial light for Cameras).

How does LiDAR work? By emitting laser light pulses and measuring the distance between the LiDAR sensor and the object
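The time-of-flight principle behind both LiDAR and Radar can be sketched in a few lines. This is an illustrative sketch, not any vendor's implementation:

```python
# Speed of light in m/s -- both laser pulses (LiDAR) and radio waves
# (Radar) travel at this speed, so the same formula applies to both.
C = 299_792_458

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target from the round-trip time of an emitted pulse.

    The pulse travels to the object and back, hence the division by 2.
    """
    return C * round_trip_s / 2

# A pulse that returns after 200 nanoseconds hit an object ~30 m away:
print(tof_distance_m(200e-9))  # ≈ 29.98 m
```

Because the sensor generates its own pulse, this measurement works identically in total darkness and in direct sunlight.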

Each sensing modality has its advantages and drawbacks.

  • As active sensors generate their own signals, external lighting conditions do not affect them. Both Radar and LiDAR function perfectly in total darkness and in direct sunlight, which is not the case for Cameras.


The Insurance Institute for Highway Safety (“IIHS”) found that, in darkness, camera- and radar-based Pedestrian Automatic Emergency Braking systems failed in every instance to detect pedestrians.[1]

Additionally, the Governor’s Highway Safety Association (“GHSA”) of the USA found, in an evaluation of 2020 roadway fatalities, that 75% of pedestrian fatalities occur at night.[2]

[1] Jessica B. Cicchino, Insurance Institute for Highway Safety, Effects of Automatic Emergency Braking Systems on Pedestrian Crash Risk 20 (2022).

[2] Governor’s Highway Safety Association, Pedestrian Traffic Fatalities by State: 2020 Preliminary Data 16 (2021).

Beyond night vision, the impact of external lighting on cameras has far-reaching consequences. For instance, Computer Vision algorithms may fail in areas with shadows caused by objects (moving or static e.g. trees or buildings) and even in indoor settings when lighting conditions change (e.g., a door opening adding more light to the scene).
  • Weather conditions can significantly affect passive sensors: since they do not directly interact with physical phenomena like rain or fog, they can only work with the resulting degraded image, making them more susceptible to weather-related limitations. Active sensors can also be affected by adverse conditions, however, depending on their wavelength.
Bad weather conditions can adversely impact the data collected by Passive sensors
  • Passive sensors are typically more energy-efficient, whereas active sensors allocate a significant portion of their energy budget to emitting laser light or radar waves. The same principle applies to their size and weight.
  • Passive sensors like Cameras are highly stealthy and difficult to detect. In contrast, active sensors like LiDAR or Radar emit signals that can be detected by other sensors, making them more conspicuous. This aspect is of paramount importance in Defense applications.
  • Similarly, active sensors can encounter signal interference, known as cross-talk. Nevertheless, with the ongoing advancement of technology, this issue is progressively becoming less of a concern for LiDAR.

However, the most crucial disparities among these sensing modalities lie in two key aspects that warrant further exploration:

1) the inherently different types of perception they provide, and 2) the potential privacy concerns they may give rise to or help evade.

Perceiving Various Dimensions of the World

Each type of sensor operates in distinct sections of the electromagnetic spectrum, utilising signals with varying wavelengths.

lidar vs radar vs camera

Cameras capture colours in two dimensions, lacking any notion of depth. Consequently, a large object positioned far away from the device can have the same number of pixels as a small object situated close to the camera.

Camera's weakness: depth perception
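The pinhole camera model makes this ambiguity easy to see. The sketch below uses an assumed, arbitrary focal length in pixels; the point is only that projected size depends on both real size and distance, so one cannot be recovered without the other:

```python
def apparent_size_px(real_size_m: float, distance_m: float,
                     focal_px: float = 1000) -> float:
    """Pinhole camera model: projected size shrinks linearly with distance.

    focal_px is an assumed, arbitrary focal length expressed in pixels.
    """
    return focal_px * real_size_m / distance_m

# A 2 m tall person at 20 m and a 0.2 m tall printed photo at 2 m
# project to exactly the same number of pixels -- from the image
# alone, the camera cannot tell them apart by size:
print(apparent_size_px(2.0, 20.0))  # 100.0
print(apparent_size_px(0.2, 2.0))   # 100.0
```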

This depth perception is not required in many simple Computer Vision tasks, such as classifying objects, where Cameras excel:

Computer vision

Indeed, the lack of depth information in camera data can pose challenges for Computer Vision software in tasks that require precise detection of object position, size, or movement.

This capability is especially important for applications such as precise crowd monitoring at scale:

LiDAR Software Helps Airports to Manage Increased Traffic
See how and why Airports are increasingly using LiDAR-based software solutions to tackle their biggest challenges, leveraging the unique value of Spatial Intelligence.

In this example real persons and a printed representation are both displayed to the camera:

The camera struggles to tell real people from a printed representation because of its lack of depth perception

Active sensors like LiDAR and Radar can detect the distance of each object; in the case of LiDAR, these measurements are made in three dimensions*, that is, not only the depth but also the exact position of any object in space.

*Strictly speaking, this applies only to 3D LiDAR, which is now frequently referred to simply as "LiDAR" due to its recent popularity, largely attributable to self-driving cars. 2D LiDAR existed for decades before that, functioning more like conventional Radar, providing depth information in only two dimensions.
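A 3D LiDAR return is essentially a range plus two beam angles, and converting it to a point in space is a standard spherical-to-Cartesian transform. A minimal sketch, with illustrative values:

```python
import math

def polar_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return (range + two beam angles) to a 3D point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A 10 m return straight ahead lands at (10, 0, 0) in sensor coordinates:
print(polar_to_xyz(10.0, 0.0, 0.0))
```

A full point cloud is simply this transform applied to every return in a scan, which is why each object's exact position in space is available directly from the data.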

In a nutshell, 3D LiDAR provides Spatial Data but can't detect colours:

3D LiDAR sensors are able to detect object size and distance, volume, speed, and shape, while cameras can only detect colour
The right sensor to use will depend on which aspects you need to detect

That means that tasks such as tracking people at scale, individually and with cm accuracy, are much more appropriate for 3D Spatial sensors like LiDAR:

Outsight's software processes raw data from LiDAR to extract information like Object Position

Radar vs. LiDAR

As we've seen, both sensors share many characteristics, chiefly being active and able to detect distance.

The Key Difference Lies in Precision: LiDAR's Laser-Level Accuracy versus Radar's Lower Resolution

LiDAR boasts laser-level accuracy, providing cm-level precision (mm-level in some 2D LiDARs), while Radar's resolution is significantly lower, posing challenges for precise tracking and for distinguishing individuals or objects in crowded environments.

One of the main reasons is the much longer wavelength of radio waves compared to laser light:

Beam divergence matters
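Beam divergence translates directly into spot size on the target, which bounds how finely a sensor can separate nearby objects. The sketch below uses assumed, order-of-magnitude figures (~3 mrad divergence for a LiDAR beam, ~10° beamwidth for an automotive radar) purely for illustration:

```python
import math

def spot_diameter_m(range_m: float, divergence_rad: float) -> float:
    """Approximate beam footprint diameter at a given range."""
    return 2 * range_m * math.tan(divergence_rad / 2)

# Assumed, order-of-magnitude figures for illustration only:
lidar_spot = spot_diameter_m(50, 3e-3)              # ~0.15 m at 50 m
radar_spot = spot_diameter_m(50, math.radians(10))  # ~8.7 m at 50 m
print(lidar_spot, radar_spot)
```

With a footprint of several metres, a radar return at that range cannot separate two pedestrians standing side by side, while a centimetre-scale laser spot can.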

While Imaging Radar, the 3D version of Radar, shows promising potential and is currently in development, its capabilities are still limited.

Presently, Radar primarily detects information in 2D, making it suitable for various Automotive use cases but less applicable in many other contexts.

Respect for Privacy

Cameras were purposefully designed for human consumption of images, enabling them to capture and deliver information that closely mirrors what a person would perceive in reality (e.g. cinema, TV, portable cameras, smartphones...)


Only recently have the same images taken by cameras found applications in automated processing by computers, utilising technologies such as Computer Vision and Image Processing AI.

This remarkable capability has expanded to encompass the identification of individuals through advanced techniques like FaceID.

Facial recognition is a risk to privacy

The use of cameras in facial recognition technology raises legitimate apprehensions about data security and personal privacy.

Consequently, the significance of safeguarding privacy has gained considerable attention from policymakers and regulators worldwide. State policies, such as the General Data Protection Regulation (GDPR) in the European Union, have been put in place to address these concerns by imposing strict guidelines and limitations on the usage of cameras and biometric data.

LiDAR stands out from cameras in this crucial aspect, thanks to its remarkable ability to uphold privacy. Unlike cameras, LiDAR devices generate point-cloud images that do not contain personally identifiable information or reveal sensitive characteristics such as gender.
3D point cloud lidar scan showing the innate anonymity that lidar has
Point-cloud data does not make it possible to identify an individual

Other aspects to consider

The following chart presents a summary of the previous comparison points, along with some additional ones:

Spider and Radar chart showing lidar vs radar vs camera
  • Cost: Cameras have the lowest unit cost, while LiDAR costs are rapidly decreasing, especially considering that 3D LiDAR sensors require fewer units per square meter than Cameras (refer to the Low density criterion in the chart), which translates into much lower setup, wiring, and networking costs.
  • Flexible mounting: 3D sensors offer the advantage of versatile installation options, enabling placement in various positions and granting many of them the ability to perceive in 360º. This feature significantly reduces setup constraints, making 3D LiDAR ideal for large infrastructure projects.
  • Range detection: Certain sensors, such as Stereovision Cameras, suffer from diminishing performance as the detection distance increases. In contrast, LiDAR sensors maintain consistent and reliable range detection capabilities even at greater distances.

Conclusion

In the world of automated processing, cameras have become a ubiquitous choice not necessarily due to their superiority as sensors for the task, but rather because of their widespread availability.

As we explored earlier in this article, the use of cameras may be suitable for certain applications like object classification, where their effectiveness is evident. However, when it comes to capturing complex Spatial Data of the physical world, these sensors reveal their limitations.

The intricacies of spatial data processing necessitate more specialised and sophisticated sensor technologies to overcome challenges and provide more accurate and comprehensive results.

While Radar shares the benefits of being an active sensor, akin to LiDAR, and equally excels in preserving privacy, its significantly lower resolution and precision disqualify it as a viable candidate for numerous use cases, such as crowd monitoring and precise traffic management.

The orders of magnitude difference in resolution makes Radar less capable of capturing intricate details and spatial data with the same level of accuracy as LiDAR.

In conclusion, the rapid advancements in LiDAR technology have ushered in a new era of affordability, reliability, and performance. As we have explored the numerous benefits it offers, from its active sensing capabilities to the preservation of privacy, it becomes evident that LiDAR has emerged as the optimal choice for a wide array of use cases and applications, while also enabling new ones that were previously unimaginable with other sensors.

As LiDAR continues to evolve and find wider integration, it holds the key to unlocking unprecedented insights and driving us into a more advanced and interconnected world.

However, while LiDAR data, in the form of point-cloud information, provides a wealth of Spatial data, this raw data alone is essentially useless without effective processing and analysis.

Extracting actionable insights and valuable information from point-cloud data requires sophisticated processing software that can interpret, analyze, and transform the data into meaningful outputs. This is precisely where Outsight comes into play.

Through advanced algorithms, cutting-edge techniques and a full set of tools, Outsight can derive valuable insights from LiDAR point-cloud data, enabling a wide range of applications across industries.

One such tool is the first Multi-Vendor LiDAR Simulator, an online platform that empowers our partners and customers to make informed decisions about which LiDAR to utilize, their optimal placement, and the projected performance and cost for any given project.

Introducing the first multi-vendor 3D LiDAR Simulator
Outsight has developed a LiDAR simulator for any use case and application, from airports to mobile robotics, smart cities and industrial applications.

Whitepaper download for real-time use cases for lidar technology
