Objective In surface structured-light three-dimensional (3D) reconstruction, the limited reconstruction depth of field causes reconstruction errors when the measured object lies beyond that depth of field. In large scenes that require an extended longitudinal shooting range, a single shot therefore cannot meet the reconstruction requirements; the structured-light system must be moved along the longitudinal direction and re-calibrated, which increases both the complexity of the task and the amount of repetitive work. In this paper, a depth-of-field-extended 3D reconstruction technique based on an auxiliary camera is proposed, in which objects inside and outside the reconstruction depth of field are reconstructed adaptively with the help of a phase threshold.
Methods The absolute phase is obtained by combining the four-step phase-shifting method with complementary Gray code, and the camera and projector are calibrated by a polynomial fitting method. With the help of a depth-of-field calculation model, the depth-of-field ranges of the main camera and the auxiliary camera are calculated, and the devices are fixed at the corresponding positions (Fig.2). The auxiliary camera acquires the pixel coordinates of the calibration board beyond the reconstruction depth of field of the main camera, and the joint calibration results of the binocular cameras are used to transform these coordinates into the main camera coordinate system. Combined with the phase values obtained by the main camera, the phase-height mapping beyond the reconstruction depth of field of the main camera is established (Fig.4). Based on the traditional phase-measuring model, the relationship between phase and shooting distance is quantitatively analyzed, and an adaptive phase threshold is proposed.
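The absolute-phase recovery described above follows the standard four-step phase-shifting formulation with Gray-code fringe orders; the following Python sketch only illustrates this general idea under that assumption (function names and the synthetic test data are illustrative, not the authors' implementation).

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shifting with phase shifts 0, pi/2, pi, 3*pi/2."""
    # Wrapped phase in (-pi, pi]
    return np.arctan2(I4 - I2, I1 - I3)

def absolute_phase(phi_wrapped, fringe_order):
    """Unwrap using the fringe order k decoded from complementary Gray-code patterns."""
    return phi_wrapped + 2.0 * np.pi * fringe_order

if __name__ == "__main__":
    # Synthetic example; in practice fringe_order is decoded from the
    # complementary Gray-code images captured by the main camera.
    h, w = 4, 6
    true_phase = np.linspace(0, 6 * np.pi, h * w).reshape(h, w)
    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    I1, I2, I3, I4 = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]
    phi = wrapped_phase(I1, I2, I3, I4)
    k = np.round((true_phase - phi) / (2 * np.pi))  # stand-in for Gray-code decoding
    print(np.allclose(absolute_phase(phi, k), true_phase))
```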
Results and Discussions Three-dimensional reconstruction with the different mapping relationships is carried out on four different measured objects (Fig.7). The results of the traditional method (Fig.7(d)) and of the proposed method (Fig.7(e)) are shown respectively, and the contrast between the two reconstructions is obvious. When a single global mapping is used to reconstruct objects both inside and outside the reconstruction depth of field, the result is inferior to reconstruction with the locally corresponding mappings (Fig.8); the adaptive phase threshold is therefore introduced. During system calibration, the prescribed order of calibration positions and shooting distances is strictly followed to verify the linear relationship between phase and shooting distance (Fig.9), which confirms the validity of the adaptive threshold.
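Because the absolute phase is verified to vary approximately linearly with the shooting distance, a per-pixel comparison of the absolute phase against a threshold can decide which phase-height mapping to apply. A minimal sketch of such a selection step is given below, assuming polynomial phase-height mappings; the threshold `phi_th` and the coefficient arrays are hypothetical placeholders rather than the calibrated values of the paper.

```python
import numpy as np

def reconstruct_height(abs_phase, coeff_in, coeff_out, phi_th):
    """Select the phase-height mapping per pixel with an adaptive phase threshold.

    abs_phase : absolute phase map of the measured object
    coeff_in  : polynomial coefficients calibrated inside the main-camera depth of field
    coeff_out : polynomial coefficients calibrated (via the auxiliary camera) outside it
    phi_th    : phase threshold derived from the linear phase-distance relationship
    """
    height_in = np.polyval(coeff_in, abs_phase)
    height_out = np.polyval(coeff_out, abs_phase)
    # Pixels whose phase exceeds the threshold are assumed here to lie beyond the
    # main-camera depth of field and therefore use the auxiliary-camera mapping.
    return np.where(abs_phase > phi_th, height_out, height_in)
```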
Conclusions For a step object inside the depth of field of the main camera, the maximum error between the reconstructed point cloud and the ground-truth data is 0.041 mm. With the help of the auxiliary camera, the mapping relationship beyond the reconstruction depth of field of the main camera is established, and the reconstruction depth-of-field range of this scene is quantitatively calculated from the depth-of-field calculation model. Experiments show that the proposed method extends the reconstruction depth-of-field range by about 50%, which greatly enlarges the measurable range of surface structured light.
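For reference, a commonly used depth-of-field calculation model from geometrical optics (an assumption about the specific form adopted in the paper) gives the near limit $D_N$, far limit $D_F$ and depth-of-field range $\Delta D$ in terms of the focal length $f$, f-number $N$, focus distance $s$ and permissible circle of confusion $c$:

$$ D_N = \frac{s f^{2}}{f^{2} + N c\,(s - f)}, \qquad D_F = \frac{s f^{2}}{f^{2} - N c\,(s - f)}, \qquad \Delta D = D_F - D_N . $$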