How to Select an "Ideal" LiDAR System for Autonomous Vehicles?

December 15, 2025

Numerous autonomous vehicles are outfitted with LiDAR (Light Detection and Ranging), a sensor that leverages laser light to gauge distances. It projects laser beams that bounce off objects and return to the sensor. By analyzing the time or frequency shift of the reflected signal, the sensor computes the distance to the object. By organizing multiple ranging measurements according to their angles, a three-dimensional 'point cloud' is generated, depicting the shapes and positions of nearby objects. The main role of LiDAR is to provide the vehicle with information about the presence, location, and approximate size of surrounding objects.

How Does It Operate?

LiDAR employs two prevalent ranging techniques: Time-of-Flight (ToF) and Frequency-Modulated Continuous Wave (FMCW). The ToF method is straightforward: a pulse of light is emitted, and the time it takes to return after reflecting off a target is measured. The distance is then calculated by multiplying this round-trip time by the speed of light and dividing by two, since the light travels to the target and back. In contrast, FMCW emits a continuous beam whose frequency varies over time. By mixing the reflected signal with the emitted one, distance and radial velocity (via the Doppler effect) can be determined simultaneously. FMCW excels at interference rejection and velocity measurement but is more intricate to implement and typically more expensive.
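
As a rough illustration of both principles, the sketch below (Python, with illustrative numbers rather than any vendor's specifications) computes a ToF range from a round-trip time, and an FMCW range and radial velocity from assumed beat and Doppler frequencies:

```python
# Illustrative ranging math for ToF and FMCW LiDAR; all numbers are made up.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s: float) -> float:
    """ToF: the pulse travels out and back, so one-way range is half the trip."""
    return C * round_trip_time_s / 2.0

def fmcw_range(beat_freq_hz: float, bandwidth_hz: float, chirp_time_s: float) -> float:
    """FMCW: mixing the return with a linear chirp yields a beat frequency
    proportional to range: f_b = 2*B*R / (c*T), so R = f_b*c*T / (2*B)."""
    return beat_freq_hz * C * chirp_time_s / (2.0 * bandwidth_hz)

def fmcw_radial_velocity(doppler_shift_hz: float, wavelength_m: float) -> float:
    """Doppler: f_d = 2*v / wavelength, so v = f_d * wavelength / 2."""
    return doppler_shift_hz * wavelength_m / 2.0

print(tof_distance(667e-9))                  # a 667 ns round trip is ~100 m
print(fmcw_range(66.7e6, 1.0e9, 10e-6))      # 66.7 MHz beat, 1 GHz / 10 us chirp -> ~100 m
print(fmcw_radial_velocity(12.9e6, 1.55e-6)) # ~10 m/s at a 1550 nm wavelength
```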

LiDAR systems can be broadly classified into two categories based on their hardware design: mechanical rotary and solid-state. Many early LiDAR systems were mechanical rotary, utilizing a motor to rotate the emission and reception units to achieve a wide horizontal field of view. These systems boast high point density but are bulky, and their reliability is compromised by mechanical parts. Solid-state LiDAR, encompassing MEMS, Optical Phased Array (OPA), and flash LiDAR, among others, offers advantages in compactness, durability, and ease of mass production and integration. However, their field of view, point density, and ranging capabilities differ.

When discussing LiDAR, frequently mentioned parameters include 'line count/channel count,' 'resolution,' 'field of view (FOV),' 'point rate (points per second),' 'ranging accuracy,' and 'wavelength.' Line count denotes the number of vertical layers; a higher count implies finer vertical resolution, but more is not automatically better. Effective point count, angular resolution, echo rate, and point density at long range matter just as much. Common LiDAR wavelengths are 905 nm and 1550 nm; because 1550 nm light is absorbed before it reaches the retina, it permits higher emission power within eye-safety limits, an advantage when detecting distant or weakly reflective targets.
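
A small-angle approximation shows why angular resolution and far-field point density matter as much as line count: the gap between adjacent returns grows linearly with range. A minimal sketch, with hypothetical figures:

```python
import math

def point_spacing(range_m: float, angular_res_deg: float) -> float:
    """Approximate gap between adjacent returns: s = R * delta_theta (radians)."""
    return range_m * math.radians(angular_res_deg)

def returns_on_target(width_m: float, range_m: float, angular_res_deg: float) -> int:
    """Rough count of horizontal returns landing on a target of a given width."""
    return int(width_m // point_spacing(range_m, angular_res_deg)) + 1

print(point_spacing(200.0, 0.1))           # ~0.35 m between points at 200 m
print(returns_on_target(0.5, 200.0, 0.1))  # a 0.5 m-wide pedestrian gets ~2 returns
```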

The performance of LiDAR is significantly influenced by the environment. Rain, snow, and fog cause scattering and generate numerous noise points, while direct sunlight or intense light increases background noise. Techniques such as filtering, multi-frame fusion, and noise modeling are commonly employed to mitigate these effects. However, no single sensor is flawless in extreme weather conditions. Consequently, many automakers currently adopt a sensor fusion strategy to enhance environmental perception accuracy.
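
As a minimal sketch of the filtering mentioned above, the following statistical outlier filter (using numpy and scipy, with illustrative parameter values) drops sparse, isolated returns of the kind produced by rain, snow, or dust:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_filter(points: np.ndarray, k: int = 8,
                               std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbors is
    unusually large. points: (N, 3) array of x, y, z coordinates."""
    tree = cKDTree(points)
    # Query k+1 neighbors because each point is its own nearest neighbor.
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + std_ratio * mean_dist.std()
    return points[mean_dist < threshold]
```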

What Are the Capabilities of LiDAR in Autonomous Driving?

When installed in a vehicle, LiDAR primarily serves three functions: generating three-dimensional geometric information of the surroundings, aiding in localization (in conjunction with high-definition maps), and providing a geometric foundation for subsequent detection and path planning. Compared to cameras, LiDAR directly provides distance information and is less susceptible to lighting conditions. Compared to millimeter-wave radars, LiDAR offers superior angular resolution and better object shape reconstruction, but millimeter-wave radars perform more reliably in rain, snow, or for direct radial velocity measurement. The current mainstream approach involves utilizing all three sensor types together, with cameras supplying rich semantic information (color, character recognition, etc.), millimeter-wave radars handling speed and high-penetration detection, and LiDAR providing precise three-dimensional localization and shape reconstruction, thereby complementing each other.

In the algorithm chain, LiDAR data processing adheres to a conventional workflow. It commences with denoising and coordinate transformation (converting sensor point clouds to vehicle or global coordinates), followed by multi-frame registration to enhance density if necessary. Subsequently, ground segmentation (separating the road surface from the point cloud), clustering (dividing point groups into independent objects), and feature extraction are conducted before entering the detection, classification, and tracking modules. Many technical solutions project point clouds into a bird's-eye view (BEV) or perform voxelization, followed by deep neural network-based detection or semantic segmentation. Processing must balance point cloud sparsity, large variations in scale with range, and real-time requirements, which challenges both compute budgets and model design.
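
A minimal sketch of the front of that pipeline is shown below, assuming an (N, 3) numpy point cloud and a 4x4 sensor-to-vehicle extrinsic matrix. The ground split here is a deliberately naive height threshold; production systems typically use plane fitting (e.g., RANSAC) or grid-based methods:

```python
import numpy as np

def to_vehicle_frame(points: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    """Apply a 4x4 sensor-to-vehicle extrinsic to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ extrinsic.T)[:, :3]

def split_ground(points: np.ndarray, z_threshold: float = 0.2):
    """Naive ground segmentation by height; illustrative only."""
    ground_mask = points[:, 2] < z_threshold
    return points[ground_mask], points[~ground_mask]

def to_bev_occupancy(points: np.ndarray, x_range=(0.0, 80.0),
                     y_range=(-40.0, 40.0), cell: float = 0.2) -> np.ndarray:
    """Project obstacle points into a bird's-eye-view occupancy grid."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[valid], iy[valid]] = 1
    return grid
```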

To put an autonomous driving solution into practice, many details must be handled, including time synchronization, calibration, sensor layout, thermal management, and ingress protection. Time synchronization is paramount: LiDAR outputs data at a high rate, and if it is not synchronized with the IMU, cameras, and GPS, multi-sensor fusion degrades. Extrinsic calibration (precisely determining the relative pose between the LiDAR and other sensors) must likewise be accurate to the millimeter level with minimal angular error; otherwise, localization and perception errors accumulate. LiDAR also generates substantial data volumes, so front-end preprocessing, filtering, compression, or BEV projection is often offloaded to dedicated chips to reduce the load on the main compute unit. Finally, labeling LiDAR point clouds is more time-consuming than labeling images, which makes labeling tools, semi-automatic labeling, and simulation-generated point cloud data indispensable in training.
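
As an illustration of why synchronization matters in software as well as hardware, the sketch below (assuming sorted timestamp arrays in seconds and an illustrative 5 ms tolerance) matches each LiDAR frame to its nearest IMU sample and flags frames whose gap is too large:

```python
import numpy as np

def match_nearest_timestamps(lidar_ts, imu_ts, tol_s: float = 0.005):
    """For each LiDAR frame timestamp, find the closest IMU sample.
    Returns indices into imu_ts, or -1 where the gap exceeds tol_s --
    a cheap way to surface synchronization problems."""
    lidar_ts = np.asarray(lidar_ts)
    imu_ts = np.asarray(imu_ts)  # assumed sorted ascending
    idx = np.clip(np.searchsorted(imu_ts, lidar_ts), 1, len(imu_ts) - 1)
    left, right = imu_ts[idx - 1], imu_ts[idx]
    nearest = np.where(np.abs(lidar_ts - left) <= np.abs(lidar_ts - right),
                       idx - 1, idx)
    gaps = np.abs(imu_ts[nearest] - lidar_ts)
    return np.where(gaps <= tol_s, nearest, -1)
```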

Project Selection and Testing Considerations

LiDAR offers substantial advantages, but its cost and engineering complexity deter many automakers from fitting it to low-end models. The costs encompass hardware procurement, installation and debugging, long-term reliability maintenance (particularly for mechanical components), and the extra compute and labeling effort it demands. Because LiDAR is limited by weather and by its dependence on target reflectivity, degradation strategies must be built into the solution: when LiDAR performance deteriorates, the system should be able to fall back on cameras or millimeter-wave radars and continue operating, possibly at a reduced functionality level.
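
A degradation policy can be as simple as a lookup from sensor health to an operating mode. The sketch below is a hypothetical illustration of the idea, not any automaker's actual scheme:

```python
from enum import Enum

class PerceptionMode(Enum):
    FULL = "lidar + camera + radar"
    DEGRADED = "camera + radar, reduced speed and functionality"
    MINIMAL = "radar only, request driver takeover"

def select_mode(lidar_ok: bool, camera_ok: bool, radar_ok: bool) -> PerceptionMode:
    """Fall back to a reduced functionality level instead of failing
    outright when LiDAR degrades; thresholds and modes are illustrative."""
    if lidar_ok and camera_ok and radar_ok:
        return PerceptionMode.FULL
    if camera_ok and radar_ok:
        return PerceptionMode.DEGRADED
    return PerceptionMode.MINIMAL
```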

For L4-level autonomous driving in confined areas, autonomous driving in freight parks, or services necessitating high-definition maps, LiDAR is virtually indispensable due to its ability to furnish stable shape and precise distance information. If the objective is mass production for ordinary passenger vehicles at the lowest possible cost, many automakers opt to utilize cameras and millimeter-wave radars as the primary sensors.

When selecting LiDAR, Smart Driving Frontier advises first clarifying the task scenario and safety objectives, then deriving performance and redundancy requirements from that scenario. Do not focus solely on line count when choosing a sensor; overall point cloud quality matters more. Pay attention to effective point rate, far-field resolution, field of view, ranging accuracy, and echo stability. The installation layout must account for occlusion, heat dissipation, ease of maintenance, and cleaning solutions (particularly in regions with frequent rain or snow). The system needs a comprehensive time synchronization scheme and a tightly coupled LiDAR-IMU localization strategy, along with automatic calibration and online health monitoring to promptly detect extrinsic parameter drift or sensor abnormalities. Algorithmically, implement multi-sensor degradation strategies, multi-frame fusion, and noise suppression, and offload some point cloud preprocessing to dedicated hardware to conserve main compute resources.
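
Online health monitoring, at its simplest, compares each frame against a rolling baseline. The sketch below (illustrative window and threshold values) flags sudden drops in point count, one cheap signal of blockage, dirt, heavy precipitation, or hardware faults; production monitors would also track extrinsic drift, for example via ground-plane fit residuals:

```python
from collections import deque

class LidarHealthMonitor:
    """Track per-frame point counts against a rolling baseline and flag
    sudden drops. Window size and drop ratio are illustrative only."""

    def __init__(self, window: int = 100, drop_ratio: float = 0.5):
        self.history = deque(maxlen=window)
        self.drop_ratio = drop_ratio

    def update(self, points_in_frame: int) -> bool:
        """Record one frame's point count; returns True if it looks healthy."""
        baseline = (sum(self.history) / len(self.history)
                    if self.history else None)
        self.history.append(points_in_frame)
        if baseline is None:
            return True  # not enough history yet to judge
        return points_in_frame >= self.drop_ratio * baseline
```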

Testing of LiDAR should cover day and night conditions, varying intensities of rain and snow, fog, low-reflectivity or otherwise difficult targets (black objects, distant pedestrians, foliage, glass, etc.), and scenarios with intense light or backlighting. Whenever feasible, combine real-world collected data with simulated data to expand the training set. The labeling system should carry uncertainty and confidence information so that the decision-making system can degrade robustly in edge scenarios.

Final Remarks

LiDAR is a single component in the sensor suite, and its usefulness does not depend on hardware alone. Sensor selection, installation and tuning, time synchronization, calibration, data collection and labeling, algorithm design, compute allocation, and the overall redundancy strategy must all be executed well to translate LiDAR's raw performance into stable, reliable driving functions.

