1. Introduction
With the rapid development of hardware devices for acquiring 3D point cloud data, such as LiDAR [1] and depth cameras [2], the volume of 3D point cloud data that can be acquired is growing rapidly. Three-dimensional point cloud data are widely used in autonomous driving [3,4,5], robotics [6], augmented and virtual reality [7], and smart cities [8,9]. More and more attention is being paid to research on 3D point cloud technology. In the field of smart homes, equipment such as sweeping robots [10] and robot housekeepers [11] needs to construct 3D point cloud maps of indoor scenes. However, a single 3D point cloud map of an indoor environment can contain hundreds of millions of points, so processing such a huge point cloud map requires substantial computing resources. The downsampling of 3D point cloud data therefore plays a key role in subsequent operations, such as segmentation [12], classification [13], and target recognition in 3D point cloud maps [14,15,16,17].
Point cloud downsampling methods can be categorized into deep learning-based [18,19,20], grid-based [21,22], clustering-based [23], and voxel-based downsampling methods [24]. Although researchers have made significant progress in point cloud downsampling in recent years, the challenges of detail loss and parameter tuning remain unresolved.
To address these issues in 3D point cloud downsampling, we investigate the downsampling algorithms for 3D point clouds and propose a dynamic downsampling algorithm for 3D point cloud maps based on voxel filtering. This algorithm aims to retain the edge information of point clouds. We introduce two modules: the dynamic downsampling module and the point cloud edge extraction module. The former dynamically segments 3D point cloud maps to perform adaptive voxel downsampling, while the latter preserves the edge information in 3D point cloud maps. We conduct comparative experiments with other downsampling algorithms, demonstrating the superior simplification effect of our proposed downsampling method.
The structure of the paper is organized as follows: In Section 2, we provide an overview of the background and significance of downsampling 3D point clouds. Section 3 introduces the principle of voxel downsampling and highlights the challenges associated with it. Section 4 presents the dynamic downsampling algorithm for 3D point cloud maps based on voxel filtering. In Section 5, we conduct comparative experiments between our downsampling algorithm and other existing methods, demonstrating the superior simplification effect of our approach. Finally, Section 7 summarizes the innovative aspects and contributions of this research paper and identifies its limitations.
Point cloud downsampling is a crucial step in point cloud processing, as it effectively reduces the size of point cloud data, decreases computational load, and accelerates subsequent operations. Point cloud downsampling methods can be categorized into deep learning-based, grid-based, clustering-based, and voxel-based downsampling.
The application of downsampling methods based on deep learning in point cloud processing is relatively scarce; many approaches combine deep learning with traditional machine learning methods. Yu [18] proposed SIEV-Net, which uses a height information supplementary module to minimize the loss of height information during the aggregation of point features in the voxel network. He [19] proposed a sparse voxel-graph attention network, SVGA-Net, which uses a voxel-graph module and a sparse-to-dense regression module to achieve comparable 3D detection from raw LiDAR data with a good simplification rate. Que [20] proposed VoxelContext-Net for static and dynamic point cloud compression. Nguyen [25] proposed a learning-based static point cloud geometry downsampling method that exploits a deep convolutional neural network with a mask to learn the probability distribution of voxels. Qin [26] applied deep learning methods to point cloud downsampling and proposed a Gaussian-model voxel network, GVNet. It introduces a lightweight convolutional neural network to learn point cloud representations and semantic information while using part of the point cloud information to improve the efficiency and accuracy of point cloud sampling. FoldingNet, proposed by Yangyan Li [27], is a point cloud downsampling method based on an autoencoder. Gezawa [28] used a laser point cloud downsampling method based on a deep convolutional autoencoder to build a sampling module based on a combined hybrid model. Point cloud and voxel data are used to determine the relationship between each point-voxel pair. In this model, each voxel is embedded using the magnitude of the view, which is the Euclidean distance between the view and the center of the object, as well as the angle between each pair of views.
Grid-based downsampling methods [21,22] are another class of commonly used downsampling methods, first proposed by Garland and Heckbert [29]. The principle is to grid the point cloud space and then use the average or a weighted combination of the points in each grid cell to replace all the points in that cell, thus reducing the number of points. Yuan [30] proposed a point cloud simplification algorithm based on voxelized grid downsampling. This method divides the point cloud space into different subspaces, samples in each subspace, and finally combines the samples into a complete sampled point cloud. The method is efficient and scalable, with the capability to cope with large-scale point cloud sampling problems. In experiments, it showed a better sampling effect and faster calculation speed than other point cloud sampling methods. Zhang [31] proposed an adaptive triangular mesh model redistribution algorithm that preserves both the details and the overall shape of the point cloud. Grid subsampling fits the point cloud plane, but it easily blurs and stretches the shape of the original 3D point cloud image. The comparison before and after grid subsampling is shown in Figure 1.
Clustering downsampling was not a main research direction in the early days but a technique applied to specific problems. With the popularization and application of point cloud data, clustering downsampling has become one of the common technologies in the field of point cloud processing and has received more research and discussion. The principle is to divide the point cloud into several clusters with a clustering algorithm and then sample those clusters. Chao [23] proposed a clustering method based on K-means. This method clusters the original majority-class samples into the same number of clusters as the minority-class samples through K-means clustering, then finds the sample center for each cluster and uses it as the new majority-class sample. The algorithm can handle irregular sampling points and is robust against changes in sampling-point density. However, clustering downsampling has a poor sampling effect on high-density point clouds; it easily loses the feature points of the point cloud image and blurs the edge information. A comparison of clouds before and after clustering downsampling is shown in Figure 2.
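To make the cluster-center idea concrete, the following is a minimal NumPy sketch of K-means-based downsampling: the cloud is partitioned into k clusters via a simplified Lloyd iteration and each cluster is replaced by its center. This is an illustrative sketch, not the implementation of [23]; the function name and parameters are assumptions.

```python
import numpy as np

def kmeans_downsample(points, k, iters=20, seed=0):
    """Replace a point cloud with the centers of k K-means clusters."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct random points.
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = points[mask].mean(axis=0)
    return centers
```

Each of the k returned rows is a cluster center that stands in for all points assigned to that cluster, so the output size is fixed by k rather than by a voxel edge length.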
Voxel-based downsampling is a commonly used downsampling method first proposed by Rusinkiewicz and Levoy [32] in 2001. The principle of voxel downsampling is to downsample the original point cloud by putting the 3D point cloud data into a 3D voxel grid, calculating the center of gravity within each voxel, and then using the center of gravity as the new sampling point. The advantage of voxel downsampling is the ability to convert original point cloud data into regular grid data, so that the point cloud can be processed and analyzed more conveniently [33]. At the same time, voxel downsampling can adjust the accuracy and density of downsampling by changing parameters such as the voxel size and step size. This method is able to handle point cloud data sizes larger by orders of magnitude. Domestic and foreign researchers have also proposed many research methods based on voxel downsampling. Xiao [24] proposed a point cloud downsampling method based on hierarchical voxel segmentation, which balances point cloud downsampling by increasing the number of segmentation layers. A comparison of 3D point clouds before and after voxel downsampling is shown in Figure 3.
In summary, while each of these downsampling methods has unique advantages, as shown in Table 1, they also present significant limitations, especially when dealing with the complexity of 3D point cloud data. Traditional methods like grid and clustering downsampling excel in noise reduction and feature preservation but often struggle with dense or irregularly distributed datasets, leading to potential loss of critical details. On the other hand, voxel-based downsampling provides a straightforward approach to regulating sampling density and maintaining the overall shape and structure of point clouds. However, it tends to overlook fine local details and may introduce noise, underscoring the necessity for a more dynamic and adaptable solution.
Given these considerations, our proposed dynamic downsampling algorithm based on voxel filtering emerges as a powerful alternative. It is specifically designed to address the limitations of existing methods by efficiently processing high-density, large indoor scenes and preserving both the edge features and local detail information of 3D point cloud maps. This approach not only enhances the downsampling performance but also significantly reduces the need for manual parameter tuning, setting a new benchmark for the field.
Voxel downsampling is a method for reducing the density of a point cloud while preserving its structural information. Let $p_i$ denote the $i$-th point in a point cloud of $n$ points, with coordinates $(x_i, y_i, z_i)$. The process divides the cloud into cubic voxels of edge length $l$ and assigns each point to the voxel with index $(\lfloor x_i/l \rfloor, \lfloor y_i/l \rfloor, \lfloor z_i/l \rfloor)$. The floor function, $\lfloor \cdot \rfloor$, ensures that points are accurately grouped by rounding their coordinates down to the nearest voxel boundary, facilitating efficient downsampling while retaining essential spatial information. Overall, the process includes three steps:

1. Divide the point cloud into cubic voxel blocks with a side length of $l$, which allows the center coordinate of each voxel block to be calculated from the voxel index.

2. For each voxel block, select a representative point based on a chosen strategy, such as the point closest to the voxel center or the one closest to the fitted plane.

3. Combine all the selected representative points to create a new point cloud, which represents the downsampled result.
The process of voxel downsampling is illustrated in Figure 4. By adjusting the side length l of the voxel blocks, we can control the density of the downsampled point cloud. A smaller l value retains more representative points and results in a higher point cloud density, while a larger l value retains fewer representative points and leads to a lower point cloud density. Therefore, in practical applications, the appropriate l value should be selected based on the particular requirements.
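The three steps above can be sketched in a few lines of NumPy, using the centroid of each voxel as the representative point (one common choice of strategy). This is an illustrative sketch under that assumption, not the paper's implementation:

```python
import numpy as np

def voxel_downsample(points, l):
    """Classic voxel downsampling with edge length l, keeping one
    centroid per occupied voxel."""
    # Voxel index of each point via the floor function.
    keys = np.floor(points / l).astype(np.int64)
    # Group points by voxel index.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    # Sum coordinates per voxel, then divide by counts to get centroids.
    sums = np.zeros((len(counts), points.shape[1]))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

The number of output points equals the number of occupied voxels, which is how the edge length l controls the density of the result.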
Classical voxel filtering is a commonly used point cloud denoising method based on the idea of segmenting the point cloud into regular voxel grids and averaging the points within each voxel. While this method effectively reduces point cloud noise, it has the following limitations:
Loss of point cloud map information: Voxel filtering essentially involves the sampling and downsampling of point clouds. However, because it retains only a single average value within each grid cell, it suffers from information loss. In applications where precise reconstruction and analysis of point cloud data are required, this information loss can lead to significant errors.
Point cloud map blurring: Voxel filtering applies averaging operations to the points within each grid cell, potentially blurring fine details on the surfaces of objects in the point cloud map. This may pose issues in applications that require the preservation of detailed surface information.

Inability to dynamically set downsampling ratios: Traditional point cloud downsampling methods, including classical voxel filtering as well as other methods such as grid-based, random, uniform, and clustering-based downsampling, cannot dynamically adjust downsampling ratios for point cloud maps with varying point counts and densities. Manual parameter tuning is required to set different downsampling factors for different point cloud maps, increasing human effort and parameter tuning time.
In response to the issues of information loss and blurring in classic voxel filtering for point cloud maps, we present an improved dynamic downsampling algorithm for 3D point cloud maps based on voxel filtering. The proposed method aims to dynamically segment 3D point cloud maps, preserving both edge information and crucial feature details. Moreover, this method adaptively segments voxels into various sizes, depending on the local density variations within the point cloud data, thereby reducing the need for manual intervention and parameter tuning.
The dynamic voxel-based 3D point cloud map downsampling algorithm consists of two modules: dynamic downsampling and point cloud edge extraction. In the former, downsampling operations are tailored based on the density of points within each voxel block of the point cloud map. Specifically, if a unit voxel block has a higher density, it is subdivided into smaller voxel blocks, whereas if a unit voxel block has a lower density, it is subdivided into larger voxel blocks. This adaptive approach allows for different downsampling operations on various voxel blocks, addressing the drawback of classic voxel filtering, which tends to blur object characteristics by uniformly processing each voxel block. The point cloud edge extraction module determines whether points within the cloud represent an object's edge by analyzing the angle feature values of their normal vectors. Points identified as object edges are preserved and then merged with the downsampled point cloud, resulting in the final, combined point cloud map.
The flowchart of the dynamic voxel-based 3D point cloud map downsampling algorithm improved by voxel filtering is illustrated in Figure 5. Accordingly, the implementation steps of the improved algorithm are described below.
Input point cloud data: We begin with the input of 3D point cloud data.
Calculation of the maximum voxel block: The maximum values along the x-, y-, and z-axes of the point cloud data, denoted by $x_{\max}$, $y_{\max}$, and $z_{\max}$, respectively, are determined. Additionally, the minimum values ($x_{\min}$, $y_{\min}$, and $z_{\min}$) along these axes are computed [34]. The edge length $L$ of the largest voxel block is determined by (2):

$L = \max(x_{\max} - x_{\min},\ y_{\max} - y_{\min},\ z_{\max} - z_{\min})$ (2)

Thus, the largest voxel block is obtained, as shown in Figure 6.
Voxel block division: The largest voxel block is divided into unit voxel blocks with a designed edge length of $l$ [35]. The division in the three directions is described in (3), which gives the number of unit voxels along each axis:

$n_x = n_y = n_z = \lceil L / l \rceil$ (3)

The summation of unit voxel blocks is illustrated in Figure 7.
We count the number of points in each voxel block to obtain the point-count collection for the voxel blocks.
Subdivision of voxel blocks: By a normalization mapping, we project the count of points within each unit voxel onto a fixed range. This calculation involves subdividing voxel blocks based on the side length of each unit voxel. According to the properties defined in (4) and (5), voxel blocks can be subdivided into different sizes based on the quantity of points within each unit voxel: densely populated unit voxels are cut into smaller sub-voxels, while sparsely populated ones are treated at a coarser size. The cutting principle is illustrated in Figure 8.
We compute the centroid point within each voxel block, retain this centroid point, and discard the remaining points within the voxel block. We store the downsampled centroid point set. The principle is illustrated in Figure 9.
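A simplified two-level sketch of this dynamic idea is given below: unit voxels whose point count exceeds a threshold are subdivided into smaller sub-voxels before centroids are taken, so dense regions keep more representative points. The threshold and split factor are illustrative assumptions, not the parameters of the proposed algorithm, which maps counts through (4) and (5):

```python
import numpy as np

def dynamic_voxel_downsample(points, l, dense_threshold=50, split=2):
    """Two-level adaptive voxel downsampling sketch: dense unit voxels
    are refined to edge length l/split; all voxels keep one centroid."""
    keys = np.floor(points / l).astype(np.int64)
    uniq, inverse, counts = np.unique(keys, axis=0,
                                      return_inverse=True, return_counts=True)
    out = []
    for j in range(len(uniq)):
        pts = points[inverse == j]
        if counts[j] > dense_threshold:
            # Dense voxel: subdivide with the finer edge length l/split.
            sub = np.floor(pts / (l / split)).astype(np.int64)
            for s in np.unique(sub, axis=0):
                out.append(pts[(sub == s).all(axis=1)].mean(axis=0))
        else:
            # Sparse voxel: a single centroid suffices.
            out.append(pts.mean(axis=0))
    return np.array(out)
```

Compared with the fixed-size voxel filter, a dense voxel here contributes up to split³ centroids instead of one, which is what preserves local detail in crowded regions.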
Detection of the characteristics of normal vectors to identify the edges of a point cloud: Let us assume a point cloud dataset with a point, denoted by $p$, having a normal vector $\mathbf{n}_p$. The set of $k$ neighboring points around point $p$ is denoted by $N(p)$. The covariance matrix $C$ can be computed as shown in (6):

$C = \frac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^{T}$ (6)

where $p_i \in N(p)$ represents the 3D point cloud coordinates of the neighboring point set around point $p$ and $\bar{p}$ is their centroid. The normal vector of point $p$ is given by (7), as the eigenvector associated with the smallest eigenvalue of $C$:

$C\,\mathbf{n}_p = \lambda_{\min}\,\mathbf{n}_p$ (7)

where $\lambda_1 \le \lambda_2 \le \lambda_3$ represent the eigenvalues of the covariance matrix $C$ and $\lambda_{\min} = \lambda_1$.
The formula for calculating the feature quantity of the angle between normal vectors is given by (8):

$\theta = \arccos\left(\frac{\mathbf{n}_1 \cdot \mathbf{n}_2}{\lVert \mathbf{n}_1 \rVert\, \lVert \mathbf{n}_2 \rVert}\right)$ (8)

In flat regions, the angle between normal vectors is small (in horizontal regions it may even be 0), while in non-flat regions the angle between normal vectors is relatively large.
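The normal estimation and angle feature described by (6)-(8) can be mirrored directly in code. The sketch below assumes the neighborhood has already been gathered (the neighbor search itself is omitted), and the helper names are illustrative:

```python
import numpy as np

def estimate_normal(neighbors):
    """PCA normal of a neighborhood: eigenvector of the covariance
    matrix C associated with the smallest eigenvalue, per (6)-(7)."""
    centered = neighbors - neighbors.mean(axis=0)
    C = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    return eigvecs[:, 0]

def normal_angle(n1, n2):
    """Angle (degrees) between two normals, per (8). The absolute value
    handles the sign ambiguity of PCA normals."""
    cos = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

A point would then be flagged as an edge point when the angles between its normal and its neighbors' normals exceed a chosen threshold; the threshold value is an application-specific choice, not something fixed by the formulas.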
The detected edge point set is merged with the downsampled centroid point set to create a new point set.
The point cloud simplification rate is a well-known metric for measuring the quality of point cloud map downsampling. The simplification rate is calculated by (9):

$R = \frac{N - M}{N}$ (9)

where $N$ is the total number of points in the original point cloud, $M$ is the total number of points in the downsampled point cloud, and $R$ is the point cloud simplification ratio.
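As a quick check, (9) can be evaluated directly against the figures reported later for our method (196,133 original points reduced to 15,906):

```python
def simplification_rate(n_original, n_downsampled):
    """Simplification rate R = (N - M) / N: the fraction of points
    removed from the original cloud."""
    return (n_original - n_downsampled) / n_original
```

For example, `simplification_rate(196133, 15906)` evaluates to about 0.9189, matching the simplification rate listed for our method in Table 3.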
We used an NVIDIA RTX 3070 GPU with 8 GB of video memory under Ubuntu 18.04; the versions of the environment dependencies used are shown in Table 2.
Our algorithm was evaluated against traditional downsampling techniques, namely, voxel downsampling, uniform downsampling, random downsampling, grid downsampling, clustering-based downsampling, and farthest-point sampling (FPS), through comparative experiments. The original point cloud used in these experiments consists of 196,133 points [36].
As shown in Figure 10, the point cloud processed by our method (Figure 10b) has a significantly reduced data volume while preserving key boundary information compared to the original point cloud (Figure 10a).
Our algorithm is capable of identifying and prioritizing the retention of key points that define the shape and structure of objects.
Figure 11a-c demonstrate that voxel downsampling mandates manual tuning of the leafsize parameter. A large leafsize makes the cloud overly sparse, erasing critical features, whereas a small leafsize fails to simplify the cloud enough to efficiently reduce data complexity.
Figure 11d-f reveal that uniform downsampling in the PCL library also necessitates manual adjustment, this time of the every_k_points parameter. Higher values of every_k_points lead to excessive data loss, while lower values do not simplify the cloud enough. Our proposed method eliminates the need for such adjustments, automatically adapting to cloud scale and density, thereby preserving edge and feature integrity more effectively.
As shown in Figure 11g-i, random downsampling in the PCL library likewise requires manual adjustment of the leafsize coefficient. The difference from voxel downsampling is that if leafsize is too small, the sampling is sparser and the original cloud's feature information is lost, while the larger the leafsize, the denser the sampling. Compared with the random downsampling method in the PCL library, the proposed algorithm saves the cost of manual parameter adjustment. It can dynamically segment point cloud maps according to different map scales and different numbers of local points and can effectively preserve the edges of the point cloud image and the feature information of objects.
Figure 11j-l illustrate the necessity of manually setting the target number of triangles for grid downsampling. This parameter directly influences the downsampling density, with low values resulting in significant data loss and high values inadequately simplifying the cloud. Unlike this method, our algorithm requires no manual parameter adjustment and dynamically adapts to the cloud's complexity, ensuring optimal simplification while retaining critical structural details.
Figure 11m-o illustrate that clustering downsampling, like voxel downsampling, necessitates manual tuning of the leafsize parameter. Oversized leafsize values lead to excessive sparsity and feature loss, whereas too-small values result in insufficient simplification, failing to meet the goal of effective point cloud reduction.
The FPS (farthest-point sampling) method demands manual specification of the output size, denoted by the target number of triangles. A low count results in an overly sparse cloud and potential feature loss, while a high count compromises the simplification objective. Figure 11p-r display the outcomes of varying the FPS parameter.
Table 3 provides a comparative analysis showcasing the effectiveness of our dynamic downsampling algorithm against traditional methods. Notably, it achieves a superior balance between simplification rate (91.89%) and processing speed (0.01289 s), highlighting its efficiency and practicality for real-time applications. The algorithm significantly reduces the data volume from the original 196,133 points to 15,906 downsampled points, maintaining essential details without the extensive manual parameter tuning required by other methods. This adaptability not only streamlines the downsampling process but also enhances usability, setting a new benchmark in 3D point cloud processing. The comparison underscores the algorithm's potential to revolutionize point cloud processing tasks, offering a user-friendly, efficient, and effective solution for various applications.
The findings presented in this study underscore the efficacy of the proposed dynamic downsampling algorithm for 3D point cloud maps, particularly in terms of simplification rate, processing speed, and the reduced necessity for manual parameter adjustment. This discussion further explores these results in the context of existing research, practical applications, and future directions.
Our dynamic downsampling algorithm demonstrates significant improvements over traditional downsampling techniques, such as voxel, random, uniform, grid, and clustering downsampling, as well as farthest-point sampling (FPS). The key advantages include a balance between a high simplification rate and rapid processing time, crucial for real-time processing applications. Unlike previous methods that often require labor-intensive parameter tuning to optimize performance, our algorithm's adaptive nature significantly reduces this burden, offering a more user-friendly approach. These enhancements align with the growing demand for efficient point cloud processing in applications like autonomous driving, robotics, and smart city planning.
The practicality of our algorithm extends beyond its immediate performance benefits. By facilitating a faster and more intuitive downsampling of 3D point cloud data, the algorithm enables more efficient subsequent processing tasks, such as segmentation and classification. This efficiency can profoundly impact industries relying on 3D scanning and modeling technologies, where processing speed and data quality directly influence operational effectiveness and innovation capabilities.

While our algorithm represents a substantial advancement in point cloud downsampling, certain limitations warrant further investigation. For instance, the algorithm's performance in extremely dense or noisy environments remains an area for improvement. Future research could explore enhancements to the edge preservation mechanism or the introduction of noise-resistant features to address these challenges.

Moreover, the adaptability of the algorithm to various point cloud data types and sources could be further examined. Extensive testing across a broader spectrum of datasets will help refine the algorithm's applicability and efficiency across different scenarios.
This study introduced a novel dynamic downsampling algorithm for 3D point cloud maps, showcasing its superiority in balancing simplification rates and processing speeds while minimizing the need for manual parameter tuning. Our contributions, through the development and validation of this algorithm, address critical challenges in the processing of large-scale 3D point cloud data, offering significant advancements over traditional downsampling methods.

The implementation of our algorithm demonstrates not just a technical achievement, but also promises substantial practical benefits across various applications. From autonomous driving systems and robotics to augmented reality and urban planning, the efficient processing of 3D point cloud data is foundational. Our work paves the way for more streamlined data analysis processes, potentially unlocking new innovations and improvements in these fields.
However, the journey to a perfect downsampling algorithm is far from complete. Despite the promising results, our algorithm, like all methods, has its limitations. Its performance in scenarios with extreme density variations and noise levels presents an opportunity for further research. Additionally, exploring the algorithm's adaptability across different types of point cloud data sources could yield even more versatile and robust solutions.
Looking ahead, we anticipate a multifaceted approach to future research. Efforts will likely focus on enhancing the algorithm's ability to preserve finer details in highly complex environments, improving noise resilience, and further reducing computational demands. Moreover, integrating machine learning techniques to dynamically adjust downsampling parameters based on the specific characteristics of each point cloud dataset could offer a pathway to even more sophisticated and automated processing tools.
In conclusion, our dynamic downsampling algorithm represents a significant step forward in the field of 3D point cloud processing. It addresses several longstanding challenges and opens up new possibilities for both academic research and practical applications. We remain committed to advancing this field, inspired by the potential to contribute to the next generation of technologies that rely on 3D point cloud data.
Conceptualization, W.L.; methodology, W.L.; validation, H.Z.; formal analysis, W.L. and X.M.; investigation, W.K. and H.S.; resources, W.K.; data curation, H.Z.; writing, original draft preparation, W.L.; writing, review and editing, W.K. and H.S.; visualization, H.Z. All authors have read and agreed to the published version of the manuscript.
This research was funded by the National Key R&D Program of China, grant number 2022YFC3803600; the National Natural Science Foundation of China, grant number 62372023; and the Science and Technology Development Fund, Macau SAR, file number 0122/2023/AMJ.
Not applicable.
Not applicable.
The data presented in this study are contained within the article.
This study was partially supported by the National Key R&D Program of China (no. 2022YFC3803600), the National Natural Science Foundation of China (no. 62372023), the Science and Technology Development Fund, Macau SAR (file no. 0122/2023/AMJ), and the Open Fund of the State Key Laboratory of Software Development Environment (no. SKLSDE-2023ZX-11). The authors appreciate the support from HAWKEYE Group.
The authors declare no conflict of interest.
Figure 1.
Comparison of 3D point clouds before and after grid downsampling. (a) Original point cloud image. (b) Grid downsampling.
Figure 2.
Comparison of 3D point clouds before and after clustering downsampling. (a) Original point cloud image. (b) Point cloud image after clustering downsampling.
Figure 3.
Comparison of 3D point clouds before and after voxel downsampling. (a) Original point cloud image. (b) Voxel downsampling.
Figure 4.
Voxel downsampling process schematic.
Figure 5.
Flowchart of the dynamic downsampling algorithm for 3D point cloud maps based on voxel filtering.
Figure 6.
Maximum voxel block.
Figure 7.
Subdivision of the largest voxel into unit voxels.
Figure 8.
Dynamic subdivision of unit voxels.
Figure 9.
Preservation of centroid point cloud.
Figure 10.
Comparison between the original and proposed methods. Images generated with (a) the original method and (b) our method.
Figure 11.
Comparative experimental evaluation of various downsampling methods. Voxel downsampling: (a) leafsize = 0.1, (b) leafsize = 0.01, and (c) leafsize = 0.05. Uniform downsampling: (d) every_k_points = 10, (e) every_k_points = 25, and (f) every_k_points = 100. Random downsampling: (g) leafsize = 0.1, (h) leafsize = 0.01, and (i) leafsize = 0.05. Grid downsampling: (j) target number of triangles = 5000, (k) target number of triangles = 10,000, and (l) target number of triangles = 15,000. Clustering downsampling: (m) leafsize = 0.1, (n) leafsize = 0.01, and (o) leafsize = 0.05. FPS: (p) target number of triangles = 5000, (q) target number of triangles = 10,000, and (r) target number of triangles = 15,000.
Table 1.
Advantages and disadvantages of various downsampling methods.
| Method | Advantages | Disadvantages |
|---|---|---|
| Deep learning-based downsampling | It can handle large-scale point cloud data efficiently and has a fast processing speed. | It requires a significant amount of training data and abundant computational resources. In cases of imbalanced training samples, it may lead to overfitting. |
| Grid downsampling | (1) It can effectively remove noise and redundant information, yielding consistent sampling results. (2) Compared with voxel downsampling, it is better at preserving the local features of point clouds. | It is not effective for dense and unevenly distributed point clouds. |
| Clustering downsampling | (1) It can effectively eliminate noise and redundant information while preserving local details. (2) It performs better on point clouds with non-uniform distributions. | It performs poorly in processing high-density point clouds and is prone to creating voids. |
| Voxel downsampling | (1) Simple and easy to implement; its sampling density can be controlled by adjusting the voxel size. (2) It can preserve the overall shape and structural characteristics of the original point cloud. | It cannot handle local detail information and may introduce noisy points. |
Table 2.
List of environment dependencies.
| System Environment | Version |
|---|---|
| CUDA | 11.0 |
| Conda | 23.1.0 |
| Python | 3.7 |
| PyTorch | 1.8.1 |
Table 3.
Experimental comparison table of various downsampling methods.
| Method | Number of Downsampled Points | Simplification Rate | Time (s) | Parameters |
|---|---|---|---|---|
| Voxel downsampling | 94,743 | 0.5169 | 0.1025 | leafsize = 0.01 |
| | 4718 | 0.9759 | 0.0125 | leafsize = 0.05 |
| | 1284 | 0.9934 | 0.0085 | leafsize = 0.1 |
| Random downsampling | 1961 | 0.9900 | 0.0143 | leafsize = 0.01 |
| | 9806 | 0.9500 | 0.0126 | leafsize = 0.05 |
| | 19,613 | 0.9000 | 0.0135 | leafsize = 0.1 |
| Uniform downsampling | 1962 | 0.9899 | 0.1129 | every_k_points = 100 |
| | 7846 | 0.9599 | 0.3493 | every_k_points = 25 |
| | 19,614 | 0.9000 | 1.0615 | every_k_points = 10 |
| Grid downsampling | 5000 | 0.9745 | 3.1347 | target_number_of_triangles = 5000 |
| | 10,000 | 0.9490 | 3.5045 | target_number_of_triangles = 10,000 |
| | 15,000 | 0.9235 | 4.0243 | target_number_of_triangles = 15,000 |
| Cluster downsampling | 95,240 | 0.5144 | 1.0808 | leafsize = 0.01 |
| | 4830 | 0.9753 | 1.007 | leafsize = 0.05 |
| | 1248 | 0.9936 | 0.9958 | leafsize = 0.1 |
| FPS | 1000 | 0.9745 | 34.8717 | target_number_of_triangles = 5000 |
| | 5000 | 0.9490 | 65.9702 | target_number_of_triangles = 10,000 |
| | 10,000 | 0.9235 | 109.9000 | target_number_of_triangles = 15,000 |
| Our method | 15,906 | 0.9189 | 0.0129 | |
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).