[2106.04180] Image2Point: 3D Point-Cloud Understanding with 2D Image Pretrained Models


[Submitted on 8 Jun 2021 (v1), last revised 23 Apr 2022 (this version, v3)]

Title: Image2Point: 3D Point-Cloud Understanding with 2D Image Pretrained Models



Authors: Chenfeng Xu and 9 other authors

Abstract: 3D point-clouds and 2D images are different visual representations of the physical world. While human vision can understand both representations, computer vision models designed for 2D image and 3D point-cloud understanding are quite different. Our paper explores the potential of transferring 2D model architectures and weights to understand 3D point-clouds, by empirically investigating the feasibility of the transfer, the benefits of the transfer, and shedding light on why the transfer works. We discover that we can indeed use the same architecture and pretrained weights of a neural net model to understand both images and point-clouds. Specifically, we transfer the image-pretrained model to a point-cloud model by copying or inflating the weights. We find that finetuning the transformed image-pretrained models (FIP) with minimal effort — only on input, output, and normalization layers — can achieve competitive performance on 3D point-cloud classification, beating a wide range of point-cloud models that adopt task-specific architectures and use a variety of tricks. When finetuning the whole model, the performance improves even further. Meanwhile, FIP improves data efficiency, improving top-1 accuracy by up to 10.0 percentage points on few-shot classification. It also speeds up the training of point-cloud models by up to 11.1x for a target accuracy (e.g., 90% accuracy). Lastly, we provide an explanation of the image to point-cloud transfer from the aspect of neural collapse. The code is available at: \url{this https URL}.
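The weight transfer the abstract describes — "copying or inflating the weights" — can be sketched in NumPy. The snippet below is a minimal, illustrative version of kernel inflation (replicating a 2D convolution kernel along a new depth axis and rescaling so activation magnitudes are preserved); the function name and the divide-by-depth normalization are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def inflate_2d_to_3d(kernel_2d, depth):
    """Inflate a 2D conv kernel of shape (out_ch, in_ch, kH, kW)
    into a 3D conv kernel of shape (out_ch, in_ch, depth, kH, kW).

    The 2D weights are replicated `depth` times along the new axis
    and divided by `depth`, so summing the 3D kernel over depth
    recovers the original 2D kernel (keeping output scale stable).
    """
    # Insert a depth axis, then repeat the 2D weights along it.
    k3d = np.repeat(kernel_2d[:, :, None, :, :], depth, axis=2)
    return k3d / depth

# Example: inflate a pretrained-style (8, 4, 3, 3) kernel to 3x3x3.
w2d = np.ones((8, 4, 3, 3), dtype=np.float32)
w3d = inflate_2d_to_3d(w2d, depth=3)
print(w3d.shape)  # (8, 4, 3, 3, 3)
```

Summing the inflated kernel over its depth axis reproduces the original 2D weights, which is why a 3D input that is constant along depth yields the same response the 2D filter would have produced.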

Submission history

From: Chenfeng Xu

[v1] Tue, 8 Jun 2021 08:42:55 UTC (3,864 KB)
[v2] Thu, 21 Apr 2022 08:30:25 UTC (1,775 KB)
[v3] Sat, 23 Apr 2022 20:15:14 UTC (1,775 KB)