Gao Yusen, Gao Nan, Ni Yubo, Meng Zhaozong, Shao Jinfeng, Zhang Zonghua. Research on pose calibration method for omnidirectional camera and rotation axis[J]. Infrared and Laser Engineering, 2023, 52(8): 20230425. DOI: 10.3788/IRLA20230425

Research on pose calibration method for omnidirectional camera and rotation axis

  • Abstract: When a camera is used to perceive an object, calibrating the pose between the camera and the external reference frame is a crucial step in measurement: how accurately the extrinsic parameters between the two are determined decides the quality of the final three-dimensional information. In traditional measurement systems, however, the size of the measured object is limited by the field of view of the pinhole camera. By comparison, an omnidirectional camera offers a wide field of view and high imaging quality; it is commonly paired with a rotation system to generate panoramic images, and can also be combined with LiDAR point clouds to build panoramic scene models, making it a mainstream direction for current large-scene measurement. To address the complexity of extrinsic calibration between a camera and a rotation system, this paper proposes a calibration method for the pose between an omnidirectional camera and a rotation axis. The method uses an omnidirectional camera to capture two ChArUco (marker-chessboard) calibration boards at successive rotation positions, establishes a reliable mathematical model through theoretical derivation as a principled foundation, and applies bundle adjustment to further refine the preliminary results obtained analytically, yielding a more accurate extrinsic estimate. The method places low demands on installation accuracy: only the placement of the calibration boards relative to the omnidirectional camera, within the camera's field of view, needs to be considered. Experimental results show that the optimized average reprojection error can be kept below 0.15 pixel, satisfying the measurement requirements and demonstrating good performance in various application scenarios.

     

    Abstract:
      Objective  In camera-based perception, accurately determining the extrinsic parameters between the camera and the external reference frame is crucial: the quality of the final three-dimensional information depends on it. Traditional systems are limited by the pinhole camera's field of view, which constrains the size of the measurable object. Omnidirectional cameras offer a wide field of view and high imaging quality; they are commonly paired with rotation systems for panorama generation or with LiDAR point clouds for scene modeling, and are a mainstream direction for large-scene measurement. This paper proposes a calibration method for the pose between an omnidirectional camera and a rotation axis. An omnidirectional camera captures two ChArUco calibration boards at successive rotation positions. A reliable mathematical model is derived, and nonlinear optimization refines the analytical initial results for an accurate extrinsic estimate. The method places low demands on installation accuracy, requiring only that the boards be placed within the camera's field of view. The experimental results indicate that the average optimized reprojection error can be controlled below 0.15 pixel, satisfying the requirements of experimental measurements and demonstrating promising application performance in various scenarios.
      Methods  A reliable system is proposed to calibrate the extrinsic parameters between the camera and the rotation axis. An omnidirectional camera with a resolution of 4 000 pixel×3 000 pixel is used to capture the dual ChArUco calibration boards (Fig.2). For the extrinsic calibration, an algorithm is designed to fit the rotation plane, and different methods for establishing the axis coordinate system are introduced (Fig.5). The accuracy of the system is evaluated using the distance from the optical center to the origin of the axis coordinate system (Fig.9) and the reprojection errors under different conditions (Fig.11).
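The rotation-plane fit described in the Methods can be posed as an ordinary least-squares plane fit to the per-view optical-center positions. A minimal sketch of this step (the function name and the synthetic trajectory below are illustrative assumptions, not the paper's code):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).

    The normal is the right singular vector of the centered point
    cloud with the smallest singular value, i.e. the direction of
    least variance of the optical-center trajectory.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Synthetic optical-center positions on a tilted circle (illustrative)
t = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
circle = np.stack([50.0 * np.cos(t), 50.0 * np.sin(t), np.zeros_like(t)], axis=1)
tilt = np.array([[1.0, 0.0, 0.0],
                 [0.0, np.cos(0.3), -np.sin(0.3)],
                 [0.0, np.sin(0.3),  np.cos(0.3)]])
points = circle @ tilt.T + np.array([10.0, -5.0, 120.0])

c, n = fit_plane(points)
axis = tilt @ np.array([0.0, 0.0, 1.0])  # true rotation-axis direction
assert abs(abs(n @ axis) - 1.0) < 1e-9   # fitted normal matches the axis
```

When the optical center sweeps a circle around the axis, the direction of least variance of the centered trajectory coincides with the rotation-axis direction, so the fitted normal can serve as one axis of the axis coordinate system.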
      Results and Discussions   In this method, the Perspective-n-Point algorithm is employed to determine the camera's optical center coordinates. Subsequently, a nonlinear least squares fitting technique is applied to fit the rotation plane and sphere of the optical center (Fig.8). The circularity fitting standard deviation for the intersection between the plane and the sphere is measured to be 0.021 8 mm, while the flatness fitting standard deviation is 0.030 1 mm. The range of distances from the camera's optical center to the axis is found to be 0.085 mm, with a standard deviation of 0.021 mm (Fig.9). Additionally, the maximum reprojection error between the experimental reference group and the other two control groups is 0.141 6 pixel (Fig.12), thereby validating the accuracy of the proposed method.
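The sphere fit to the optical-center positions mentioned above can likewise be reduced to a linear least-squares problem, since |p − c|² = r² rearranges to 2·p·c + (r² − |c|²) = |p|², which is linear in the unknowns. An illustrative sketch on synthetic data (not the paper's implementation):

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: returns (center, radius).

    Solves A x = b with x = (cx, cy, cz, d), d = r^2 - |c|^2,
    obtained by rearranging |p - c|^2 = r^2 for each point p.
    """
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius

rng = np.random.default_rng(0)
true_c = np.array([1.0, -2.0, 3.0])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = true_c + 75.0 * dirs               # points on a sphere of radius 75
c_fit, r_fit = fit_sphere(pts)
assert np.allclose(c_fit, true_c, atol=1e-6)
assert abs(r_fit - 75.0) < 1e-6
```

Intersecting the fitted plane with the fitted sphere then yields the circle traced by the optical center, whose residuals give the circularity and flatness standard deviations reported above.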
      Conclusions  To address the issue of pose uncertainty between the camera and the rotation axis, this paper proposes a calibration method based on an omnidirectional camera and dual ChArUco calibration boards. The method captures multiple sets of images containing the dual targets to obtain the position of the camera's optical center at each shooting position. By establishing a mathematical model for the coordinate system transformation, the pose relationship between the camera and the rotation axis is computed and optimized, effectively suppressing the influence of random errors in the experiments. Experimental results demonstrate that the proposed method achieves sub-millimeter accuracy in the distance between the camera and the rotation axis, with an average optimized reprojection error controlled below 0.15 pixel. Compared with other methods, the presented method has lower system complexity, improves accuracy through the use of two calibration boards, and effectively mitigates random errors caused by placement variations. The results indicate that the method is robust and convenient, and can be reliably applied to image-acquisition tasks in diverse scenarios.
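The reprojection-error metric used throughout the validation reduces, per board corner, to projecting the 3D point through the calibrated model and measuring the pixel residual against the detected corner. A minimal pinhole-model sketch (the intrinsic matrix and point values below are made-up illustrations, not the paper's calibration data):

```python
import numpy as np

def reprojection_error(K, R, t, pts3d, pts2d):
    """Mean reprojection error in pixels for a pinhole model (no distortion)."""
    cam = pts3d @ R.T + t                # world -> camera frame
    proj = cam @ K.T                     # apply intrinsics
    proj = proj[:, :2] / proj[:, 2:3]    # perspective divide
    return np.linalg.norm(proj - pts2d, axis=1).mean()

K = np.array([[3000.0,    0.0, 2000.0],
              [   0.0, 3000.0, 1500.0],
              [   0.0,    0.0,    1.0]])
R, t = np.eye(3), np.zeros(3)
pts3d = np.array([[0.1, 0.2, 5.0], [-0.3, 0.1, 6.0], [0.2, -0.2, 4.0]])
proj = pts3d @ K.T
pts2d = proj[:, :2] / proj[:, 2:3]       # ideal (noise-free) detections
err = reprojection_error(K, R, t, pts3d, pts2d)
assert err < 1e-12                       # exact model reprojects with zero error
```

In practice the detected ChArUco corners carry noise, so the mean residual is nonzero; the paper's figure of below 0.15 pixel refers to this averaged residual after optimization.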

     
