Abstract:
3D point cloud processing plays an essential role in object segmentation, medical image segmentation, and virtual reality. However, existing 3D point cloud learning networks have a limited global feature extraction range and cannot capture local high-level semantic information, which leads to incomplete point cloud feature representations. To address these problems, a point cloud classification and segmentation network that compensates global features with semantic information is proposed. First, the input point cloud is aligned to a canonical space and preprocessed by an input transformation. Then, an expanded edge convolution module extracts features from each layer of the transformed data, and these layer-wise features are stacked to generate the global features. For local feature extraction, the extracted low-level semantic information is used to describe high-level semantic features and effective geometric information, which compensate for the point cloud features missing from the global features. Finally, the global features and the local high-level semantic information are combined to obtain the overall point cloud feature. Experimental results show that the proposed method outperforms current classic and state-of-the-art algorithms in both classification and segmentation performance.
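The pipeline summarized above (input transformation, edge-convolution feature extraction, stacking of layer-wise features into a global descriptor) follows the general edge-convolution recipe. The sketch below is an illustrative assumption only, not the authors' implementation: a plain k-NN EdgeConv layer in PyTorch and the stacking/max-pooling step that produces a global feature. The names (`EdgeConv`, `knn`), the choice of k, and the omission of the expanded neighborhoods and the semantic-compensation branch are all simplifications for illustration.

```python
# Illustrative sketch (not the paper's code): a k-NN EdgeConv layer and
# global feature stacking for point clouds of shape (batch, num_points, C).
import torch
import torch.nn as nn

def knn(x, k):
    # x: (B, N, C). Indices of the k nearest neighbors of each point.
    dist = torch.cdist(x, x)                                   # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]   # drop self

class EdgeConv(nn.Module):
    """Edge convolution: an MLP over [x_i, x_j - x_i] for each neighbor j,
    followed by max pooling over the neighborhood of point i."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim),
            nn.BatchNorm1d(out_dim),
            nn.ReLU(),
        )

    def forward(self, x):
        # x: (B, N, C) per-point features (xyz for the first layer).
        B, N, C = x.shape
        idx = knn(x, self.k)                                   # (B, N, k)
        neighbors = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))         # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(neighbors)
        edge = torch.cat([center, neighbors - center], dim=-1) # (B, N, k, 2C)
        out = self.mlp(edge.reshape(-1, 2 * C)).reshape(B, N, self.k, -1)
        return out.max(dim=2).values                           # (B, N, out_dim)

# Stack two layers, concatenate their outputs, and max-pool into a
# global descriptor (the local semantic-compensation branch is omitted).
pts = torch.randn(2, 1024, 3)
conv1, conv2 = EdgeConv(3, 64), EdgeConv(64, 128)
f1 = conv1(pts)
f2 = conv2(f1)
per_point = torch.cat([f1, f2], dim=-1)        # (2, 1024, 192) stacked features
global_feat = per_point.max(dim=1).values      # (2, 192) global feature
```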