3D perception adds a new capability: perceiving reality without occlusions.

Understanding Shadowless 3D Perception

Unlike cameras, which perceive reality from a single point of view, 3D native data from LiDAR opens up new possibilities.

A significant advantage of 3D LiDAR perception over other optical sensors like cameras or people counting solutions is its ability to collect environmental information without any obstructions (i.e., Shadowless perception).

To better understand this concept, let's look at a typical scene in a crowded environment with four individuals.

The scene we want to capture, a typical situation in crowded environments. The right part of the image shows a top-view version to better depict the position of each person.

These images show what different cameras in various positions would see from the scene:

Depending on the point of view of each camera, some persons are occluded by others.
As seen in the image, different people are occluded by others, depending on each camera position. Here, the adults cast a "shadow" that hides the child.

Under these conditions, even the most advanced computer vision algorithms for people counting or object tracking will struggle to detect the hidden individuals and consistently follow them over time.

Now, let's view the same scene through a LiDAR sensor. With 3D vision, LiDAR accurately determines the position and size of each person.

LiDAR is no different in this respect: each individual sensor can only see part of the scene.
However, just like with a camera, the LiDAR's laser pulses must hit an object to detect its presence. If an object is hidden behind another, it won't be detected.

Different LiDAR positions will miss different objects:

Different points of view of LiDAR create different blind zones.

So, if LiDAR has the same limitations as cameras in these situations, why discuss LiDAR at all?

The answer is that 3D perception changes everything.

Because laser pulses are natively positioned in a 3D coordinate system (unlike camera images), advanced fusion software like Outsight's Shift Perception can seamlessly merge the data from each sensor into a global 3D point cloud:

In this image, we show how the data from two different sensors is merged into a single point cloud.
With this approach, no LiDAR sensor perceives reality independently: each one contributes to a shared pool of information.
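To make this concrete, here is a minimal sketch of the underlying idea (not Outsight's actual implementation), assuming each sensor's pose in the shared coordinate system is already known as a 4x4 homogeneous transform. All names and values below are hypothetical:

```python
import numpy as np

def to_global(points: np.ndarray, sensor_pose: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud from sensor coordinates into the
    shared global frame using the sensor's 4x4 pose matrix."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ sensor_pose.T)[:, :3]

# Hypothetical setup: two sensors observing the same scene.
pose_a = np.eye(4)                # sensor A defines the global origin
pose_b = np.eye(4)
pose_b[:3, 3] = [10.0, 0.0, 0.0]  # sensor B sits 10 m away along x

cloud_a = np.random.rand(1000, 3)  # stand-ins for real scans
cloud_b = np.random.rand(1000, 3)

# Each sensor contributes its points to the same shared pool.
merged = np.vstack([to_global(cloud_a, pose_a),
                    to_global(cloud_b, pose_b)])
print(merged.shape)  # (2000, 3): one global cloud, two contributors
```

Once every point lives in the same global frame, downstream software no longer cares which device produced it.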

As a result, an advanced Spatial AI solution like Outsight's Shift Analytics will leverage a unique kind of LiDAR sensor data, the equivalent of a virtual 3D sensor without occlusions, offering shadowless perception:

The final result is a 3D point cloud that is independent of which specific sensor fed the common pool of data.

As shown in the image, 1 + 1 = 3. Thanks to spatial consistency, the resulting perception is far better than the two sensors' separate perceptions combined.

The challenges of using Shadowless Perception

Calibration

To perfectly merge the data from different LiDAR sensors, it's important to use appropriate software, tools and methods.

If the LiDARs are not correctly aligned on the same 3D coordinate system, they can easily create phantom points: the same physical point, seen by different LiDARs, can be interpreted as belonging to two different objects in the scene.

This alignment process, known as calibration, becomes exponentially more challenging as the number of sensors increases.

Calibrating a few LiDARs is much simpler than calibrating hundreds, a task our solution regularly handles for customers in airports, train stations, and factories, anonymously following the movement of thousands of people in crowded environments in real time.
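As a back-of-the-envelope illustration (with hypothetical poses and numbers, not real calibration data), the sketch below shows how even a one-degree extrinsic error can turn one physical point into two apparent ones:

```python
import numpy as np

def rot_z(deg: float) -> np.ndarray:
    """Build a 4x4 homogeneous rotation about the z axis."""
    r = np.radians(deg)
    m = np.eye(4)
    m[:2, :2] = [[np.cos(r), -np.sin(r)],
                 [np.sin(r),  np.cos(r)]]
    return m

true_pose_b = rot_z(90.0)  # sensor B's real pose in the global frame
bad_pose_b = rot_z(91.0)   # calibration that is off by just 1 degree

point_in_b = np.array([20.0, 0.0, 1.5, 1.0])  # a person 20 m from sensor B

true_global = true_pose_b @ point_in_b
wrong_global = bad_pose_b @ point_in_b

# At 20 m range, a 1-degree error already displaces the point by ~35 cm,
# enough for fusion to split one person into two apparent detections.
print(np.linalg.norm(true_global - wrong_global))  # ~0.35
```

The error grows linearly with range, which is why calibration quality matters most in the large venues where shadowless perception is most valuable.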

The challenge of Synchronization

When merging multiple point clouds from different LiDARs, a significant challenge is the lack of precise temporal synchronization.

Unlike cameras, which capture an entire scene in a single instant, LiDARs scan the environment over a period of time (typically 100 ms) to create a full 3D view.

This means they can't observe all objects simultaneously. For example, a person scanned by one LiDAR at time t will be scanned by another LiDAR at time t + t1, during which the person may have moved and changed posture. This results in a blurred and imprecise fusion of data.
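One common way to handle this (a simplified sketch under assumed values, not necessarily how Outsight's software does it) is to extrapolate each tracked object to a common fusion timestamp using its estimated velocity:

```python
import numpy as np

def at_time(position: np.ndarray, velocity: np.ndarray,
            t_scan: float, t_target: float) -> np.ndarray:
    """Linearly extrapolate a tracked object's position from the moment
    it was scanned (t_scan) to the common fusion timestamp (t_target)."""
    return position + velocity * (t_target - t_scan)

velocity = np.array([1.4, 0.0, 0.0])      # walking speed, ~1.4 m/s

# The same person, scanned 100 ms apart by two unsynchronized LiDARs.
pos_lidar_a = np.array([5.00, 2.0, 0.0])  # scanned at t = 0.00 s
pos_lidar_b = np.array([5.14, 2.0, 0.0])  # scanned at t = 0.10 s

# Naive fusion: the two detections of one person disagree by 14 cm.
print(np.linalg.norm(pos_lidar_a - pos_lidar_b))  # ~0.14

# Time-aligned fusion: both detections coincide at the common timestamp.
a_aligned = at_time(pos_lidar_a, velocity, 0.00, 0.10)
print(np.linalg.norm(a_aligned - pos_lidar_b))    # ~0.0
```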

The challenge of creating accurate Analytics

Any analytics software or company that doesn't provide appropriate fusion and processing capabilities, able to merge, calibrate, and synchronize many LiDAR sensors precisely, will struggle to produce correct metrics.

The benefits of occlusion-free perception

The advantages of shadowless perception translate into several concrete benefits:

  • Accurate Data Fusion:
    Merging data from multiple LiDAR sensors into a unified 3D point cloud ensures comprehensive scene understanding without phantom points.
  • Improved Analytics:
    Shadowless perception eliminates occlusions, allowing for accurate detection of all objects in a scene, even in crowded environments. This accuracy is essential for high-quality data analytics.
  • Enhanced Tracking:
    Continuous and unobstructed tracking of moving objects becomes possible, improving the reliability of tracking results.
  • Scalability:
    Advanced calibration and synchronization methods enable the integration of numerous sensors, making it scalable for large applications like airports, sports venues, and factories.
  • Fewer Sensors per Square Meter:
    Without a good fusion solution, more sensors per square meter are needed. Shadowless perception reduces the number of required sensors.
  • Robustness:
    Each sensor feeds into a common pool of information, so a hardware malfunction of a single device decreases only the number of available points but does not become a single point of failure.
  • Optimized Use of Different LiDAR Technologies:
    Different LiDAR manufacturers and models create different scanning patterns. A shadowless perception solution like Outsight's leverages the best of each technology, creating a full 3D virtual sensor that surpasses the capabilities of each device separately.
