In this paper, a U-Net network is used to extract the phase information of the measured dynamic object directly from fringe patterns captured in high-speed scenes. Because image acquisition in high-speed scenes is fast and the exposure time is short, the captured fringe images usually carry severe noise.
As the input of the deep learning network in this paper, the captured noisy fringe image can be expressed as:
$$ I({a}^{c},{b}^{c})=A({a}^{c},{b}^{c})+B({a}^{c},{b}^{c})\cos\left[\varphi ({a}^{c},{b}^{c})\right] $$ (1) where $ I $ is the noisy fringe image recorded by the camera; $ ({a}^{c},{b}^{c}) $ are the pixel coordinates of the high-speed camera; $ A({a}^{c},{b}^{c}) $ is the average intensity map; $ B({a}^{c},{b}^{c}) $ is the amplitude intensity map; and $ \varphi ({a}^{c},{b}^{c}) $ is the absolute phase map of the measured object. Using the least-squares method [12], the wrapped phase of the measured object in Eq. (1) can be expressed as:
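As an illustration of the image formation model in Eq. (1), the following NumPy sketch synthesizes a noisy fringe image; the fringe frequency, intensity levels, and Gaussian noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Synthesize a noisy fringe image following Eq. (1):
# I(a, b) = A(a, b) + B(a, b) * cos(phi(a, b)), plus short-exposure noise.
H, W = 128, 128
a, b = np.meshgrid(np.arange(W), np.arange(H))

A = 0.5 * np.ones((H, W))        # average intensity map A(a, b)
B = 0.4 * np.ones((H, W))        # amplitude (modulation) map B(a, b)
f = 16                           # fringe periods across the image (assumed)
phi = 2 * np.pi * f * a / W      # ideal phase of a flat reference plane

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.03, (H, W))  # assumed sensor noise level
I = A + B * np.cos(phi) + noise        # captured fringe image I(a, b)
```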
$$ \varphi \left({a}^{c},{b}^{c}\right)=\arctan\dfrac{M\left({a}^{c},{b}^{c}\right)}{D\left({a}^{c},{b}^{c}\right)}=\arctan\dfrac{\rho B\left({a}^{c},{b}^{c}\right)\sin\left(\varphi \left({a}^{c},{b}^{c}\right)\right)}{\rho B\left({a}^{c},{b}^{c}\right)\cos\left(\varphi \left({a}^{c},{b}^{c}\right)\right)} $$ (2) Eq. (2) casts the wrapped phase of the measured object as an arctangent function, where $ M\left({a}^{c},{b}^{c}\right) $ and $ D\left({a}^{c},{b}^{c}\right) $ denote the numerator and denominator terms of the arctangent function, respectively, and $ \rho $ is a constant arising in the least-squares phase calculation.
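The least-squares construction of the numerator term $M$ and denominator term $D$ in Eq. (2) can be sketched as follows. A standard three-step phase-shifting scheme is assumed here purely to generate a ground-truth check; in the paper, the trained network predicts $M$ and $D$ from a single noisy image instead.

```python
import numpy as np

def wrapped_phase(images, shifts):
    """Least-squares wrapped phase from phase-shifted fringes
    I_n = A + B*cos(phi - delta_n); returns arctan(M / D)."""
    M = sum(I * np.sin(d) for I, d in zip(images, shifts))  # numerator term
    D = sum(I * np.cos(d) for I, d in zip(images, shifts))  # denominator term
    # arctan2 keeps the full [-pi, pi] range and handles D = 0 safely
    return np.arctan2(M, D)

# Synthetic check against a known plane phase (illustrative values)
H, W = 64, 64
phi = 2 * np.pi * 8 * np.arange(W) / W * np.ones((H, 1))
shifts = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]      # three-step shifts
imgs = [0.5 + 0.4 * np.cos(phi - d) for d in shifts]
phi_w = wrapped_phase(imgs, shifts)
# residual after removing the 2*pi wrapping ambiguity
err = np.angle(np.exp(1j * (phi_w - phi)))
```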
Owing to the properties of the arctangent function, the resulting wrapped phase is confined to $ [-\pi ,\pi ] $ and exhibits discontinuous jumps of $ 2\pi $. In practice, during deep learning, the U-Net network is used to output the numerator and denominator terms of the phase of the measured object instead of directly predicting the phase itself. The benefit of this choice is that it sidesteps the difficulty of reproducing the $ 2\pi $ jumps, enabling the network to predict a high-quality phase. Combining the numerator and denominator terms obtained from the network through the arctangent function yields the wrapped phase of the measured object. To further obtain the absolute phase, a phase unwrapping algorithm must "unwrap" the wrapped phase, converting it into the absolute phase $ \varPhi ({a}^{c},{b}^{c}) $ so that continuously distributed phase information is obtained. The relationship between the absolute phase and the wrapped phase of the measured object can be expressed as:
$$ \varPhi ({a}^{c},{b}^{c})=\varphi ({a}^{c},{b}^{c})+2\pi k({a}^{c},{b}^{c}) $$ (3) where $ \varPhi ({a}^{c},{b}^{c}) $ is the absolute phase of the object and $ k({a}^{c},{b}^{c}) $ is the fringe order.
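The relation in Eq. (3) can be verified numerically; a 1-D synthetic phase ramp is assumed purely for illustration.

```python
import numpy as np

# Eq. (3): the absolute phase Phi differs from the wrapped phase phi by an
# integer multiple k of 2*pi (the fringe order).
Phi = np.linspace(0, 6 * np.pi, 200)        # continuous absolute phase ramp
phi = np.angle(np.exp(1j * Phi))            # wrapped to [-pi, pi]
k = np.round((Phi - phi) / (2 * np.pi))     # fringe order per sample
recovered = phi + 2 * np.pi * k             # Eq. (3) reassembled
```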
The multi-frequency temporal phase unwrapping method [13] performs phase unwrapping by projecting fringe images of several different frequencies onto the measured object. To improve the efficiency of acquiring noisy fringe images at high speed, grating patterns of only two frequencies, a base frequency and a high frequency, are projected here. Because the spatial frequency of the high-frequency grating strongly affects the accuracy of the 3D reconstruction, its spatial frequency should be made as high as possible to obtain better results.
Fringe images of different frequencies are projected onto the surface of the measured object by the projector and captured synchronously by the high-speed camera; the high-frequency wrapped phase $ {\varphi }_{h}({a}^{c},{b}^{c}) $ and the low-frequency wrapped phase $ {\varphi }_{l}({a}^{c},{b}^{c}) $ are then computed [14]. Combined with Eq. (3), this can be expressed as:
$$ {\varPhi }_{h}\left({a}^{c},{b}^{c}\right)={\varphi }_{h}\left({a}^{c},{b}^{c}\right)+2\pi\,\mathrm{round}\left[\dfrac{{\varphi }_{l}\left({a}^{c},{b}^{c}\right){f}_{h}/{f}_{l}-{\varphi }_{h}\left({a}^{c},{b}^{c}\right)}{2\pi }\right] $$ (4) where $ \mathrm{round}\left[\,\cdot\,\right] $ is the rounding function, and $ {f}_{h} $ and $ {f}_{l} $ are the fringe frequencies of the high- and low-frequency patterns, respectively. Therefore, once the fringe order and the wrapped phase are known, the absolute phase corresponding to the object's wrapped phase can be computed.
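A minimal NumPy sketch of the dual-frequency unwrapping in Eq. (4), assuming illustrative frequencies $f_l = 1$ and $f_h = 16$ (with $f_l = 1$, the low-frequency phase is already continuous across the field of view):

```python
import numpy as np

def unwrap_dual_freq(phi_h, phi_l, f_h, f_l):
    """Eq. (4): the low-frequency phase, scaled by f_h/f_l, predicts the
    high-frequency fringe order, which removes the 2*pi ambiguity."""
    k = np.round((phi_l * f_h / f_l - phi_h) / (2 * np.pi))  # fringe order
    return phi_h + 2 * np.pi * k                              # absolute phase

# Synthetic check with a known ground-truth absolute phase
f_l, f_h = 1, 16
Phi_h = np.linspace(0, 2 * np.pi * f_h, 500)  # ground-truth absolute phase
phi_h = np.angle(np.exp(1j * Phi_h))          # wrapped high-frequency phase
phi_l = Phi_h * f_l / f_h                     # unit-frequency phase (continuous)
Phi_rec = unwrap_dual_freq(phi_h, phi_l, f_h, f_l)
```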
A learning-based approach for noise reduction with raster images
Abstract: Three-dimensional (3D) shape measurement based on fringe projection is widely used in industrial manufacturing, quality testing, biomedicine, aerospace, and other fields. However, due to the short exposure time of the raster image acquisition process, 3D reconstruction results are usually affected by serious image noise in high-speed measurement scenarios. In recent years, deep learning has been widely applied in computer vision and other fields and has achieved great success. Inspired by this, we propose a learning-based approach for noise reduction with raster images. First, we construct a convolutional neural network based on U-Net. Second, during training, the network learns the mapping between noisy fringe images and the corresponding high-quality wrapped phase. With proper training, the network can accurately recover phase information from noisy fringe images. For off-line 3D measurement of fast-moving scenes, experimental results show that the proposed method recovers high-precision phase information using only one raster image, with phase accuracy better than the traditional three-step phase-shifting method. This method provides a practical and reliable solution for improving the accuracy of 3D measurement in high-speed scenes.
Key words:
- high-speed 3D shape measurement
- noisy fringe images
- deep learning
- phase recovery
Figure 2. Different stages of high-frequency noisy fringe images restoration. (a) High-frequency noisy fringe image of the doll cat; (b) High-frequency noisy fringe image of the combined object; (c) The arc tangent function numerator of the doll cat; (d) The arc tangent function numerator of the combined object; (e) The arc tangent function denominator of the doll cat; (f) The arc tangent function denominator of the combined object; (g) The absolute phase of the doll cat; (h) The absolute phase of the combined object
Figure 3. Comparison of the phase error between deep learning and traditional method. (a) The error between the true value and the absolute phase of the doll cat recovered by the traditional three-step phase shift method; (b) The error between the true value and the absolute phase of the doll cat using deep learning; (c) The error curve of the true value and the absolute phase of the doll cat recovered by the two methods; (d) The error between the true value and the absolute phase of the combined object recovered by the traditional three-step phase shift method; (e) The error between the true value and the absolute phase of combined objects using deep learning; (f) The error curve of the true value and the absolute phase of the combined object recovered by the two methods
Figure 4. Comparison of 3D reconstruction results under different methods. (a) 3D reconstruction result of the doll cat restored by the method in this paper; (b) 3D reconstruction result of the doll cat restored by the traditional method; (c) The true 3D reconstruction result of the doll cat; (d) 3D reconstruction result of the combined object recovered by the method in this paper; (e) 3D reconstruction result of the combined object recovered by the traditional method; (f) The true 3D reconstruction result of the combined object
Figure 5. Comparison of 3D reconstruction results of the fan at different speeds. (a) Fan image collected at the first speed (about 800 rpm); (b) 3D reconstruction result of the fan at the first speed by the traditional three-step phase shift method; (c) 3D reconstruction result of the fan at the first speed based on the U-Net network; (d) Fan image collected at the second speed (about 1800 rpm); (e) 3D reconstruction result of the fan at the second speed by the traditional three-step phase shift method; (f) 3D reconstruction result of the fan at the second speed based on the U-Net network
Figure 6. 3D reconstruction and analysis of precision sphere. (a) 3D reconstruction result of left precision sphere; (b) Error distribution of left precision sphere; (c) Error histogram of left precision sphere; (d) 3D reconstruction result of right precision sphere; (e) Error distribution of right precision sphere; (f) Error histogram of right precision sphere
[1] Gorthi S S, Rastogi P. Fringe projection techniques: Whither we are? [J]. Optics and Lasers in Engineering, 2010, 48: 133-140.
[2] Feng S, Chen Q, Gu G, et al. Fringe pattern analysis using deep learning [J]. Advanced Photonics, 2019, 1(2): 025001.
[3] Qian K. Two-dimensional windowed Fourier transform for fringe pattern analysis: Principles, applications and implementations [J]. Optics and Lasers in Engineering, 2007, 45(2): 304-317.
[4] Xu J, Zhang S. Status, challenges, and future perspectives of fringe projection profilometry [J]. Optics and Lasers in Engineering, 2020, 135: 106193. doi: 10.1016/j.optlaseng.2020.106193
[5] Zhang S. High-speed 3D shape measurement with structured light methods: A review [J]. Optics and Lasers in Engineering, 2018, 106: 119-131.
[6] Feng S, Zuo C, Yin W, et al. Micro deep learning profilometry for high-speed 3D surface imaging [J]. Optics and Lasers in Engineering, 2019, 121: 416-427. doi: 10.1016/j.optlaseng.2019.04.020
[7] Ma G Q, Liu L, Yu Z L, et al. Application and development of three-dimensional profile measurement for large and complex surface [J]. Chinese Optics, 2019, 12(2): 214-228. (in Chinese) doi: 10.3788/co.20191202.0214
[8] Zhang Q, Wang Q, Hou Z, et al. Three-dimensional shape measurement for an underwater object based on two-dimensional grating pattern projection [J]. Optics & Laser Technology, 2011, 43(4): 801-805.
[9] Yin W, Chen Q, Feng S, et al. Temporal phase unwrapping using deep learning [J]. Scientific Reports, 2019, 9(1): 20175. doi: 10.1038/s41598-019-56222-3
[10] Feng S, Zuo C, Yin W, et al. Application of deep learning technology to fringe projection 3D imaging [J]. Infrared and Laser Engineering, 2020, 49(3): 0303018. (in Chinese) doi: 10.3788/irla.35_2020-12by
[11] Zhong J X, Feng S, Yin W, et al. Speckle projection profilometry with deep learning [J]. Infrared and Laser Engineering, 2020, 49(6): 20200011. (in Chinese) doi: 10.3788/irla.8_2020-0011
[12] Malacara D. Optical Shop Testing [M]. New York: John Wiley & Sons, 2007: 59.
[13] Zuo C, Huang L, Zhang M, et al. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review [J]. Optics and Lasers in Engineering, 2016, 85: 84-103. doi: 10.1016/j.optlaseng.2016.04.022
[14] Zuo C, Feng S, Huang L, et al. Phase shifting algorithms for fringe projection profilometry: A review [J]. Optics and Lasers in Engineering, 2018, 109: 23-59. doi: 10.1016/j.optlaseng.2018.04.019
[15] Lohry W, Zhang S. High-speed absolute three-dimensional shape measurement using three binary dithered patterns [J]. Optics Express, 2014, 22(22): 26752-26762.
[16] Wu Z, Guo W, Zhang Q. High-speed three-dimensional shape measurement based on shifting Gray-code light [J]. Optics Express, 2019, 27(16): 22631-22644. doi: 10.1364/OE.27.022631
[17] Feng S, Zuo C, Tao T, et al. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry [J]. Optics and Lasers in Engineering, 2018, 103: 127-138. doi: 10.1016/j.optlaseng.2017.12.001
[18] Zhang H, Zhang Q, Li Y, et al. High-speed 3D shape measurement with temporal Fourier transform profilometry [J]. Applied Sciences, 2019, 9(19): 4123. doi: 10.3390/app9194123
[19] Wang L Z, Wang Y, Liang J, et al. Measurement of full-field strain in cell phone dropping test by high-speed 3D digital image correlation method [J]. Optics and Precision Engineering, 2018, 26(9): 2174-2180. (in Chinese) doi: 10.3788/OPE.20182609.2174