
Fast measurement of human body posture based on three-dimensional optical information

Song Limei, Huang Haozhen, Chen Yang, Zhu Xinjun, Yang Yangang, Guo Qinghua

Citation: Song Limei, Huang Haozhen, Chen Yang, Zhu Xinjun, Yang Yangang, Guo Qinghua. Fast measurement of human body posture based on three-dimensional optical information[J]. Infrared and Laser Engineering, 2020, 49(6): 20200079. doi: 10.3788/IRLA20200079


doi: 10.3788/IRLA20200079
  • CLC number: TP391.41


  • Abstract:

    The 3D measurement of human body posture inside automobiles is of great significance for evaluating the comfort of car seat design. In order to acquire 3D human body data inside a car quickly and accurately, this paper adopts a stereo 3D data acquisition method based on binocular vision, which combines structured light with marker points to realize fast reconstruction of the human body 3D point cloud and automatic, rapid measurement of the 3D posture (distances and angles). Experimental results show that, at a distance of more than 2 m and a measurement range of 1.5 m×2 m, the posture measurement accuracy of the method reaches 0.03 mm, meeting the demand for high-precision 3D data acquisition of human posture in automobiles. Compared with traditional 3D measurement methods for human posture in automobiles, the automatic 3D measurement method used in this paper not only has a high degree of automation, but also offers high accuracy, high speed and strong robustness.

  • Figure 1.  Target data: (a) distribution of the markers; (b) angle calculation diagram

    Figure 2.  Binocular 3D human body scanning system

    Figure 3.  Calibration balls: (a) calibration balls used in system measurement accuracy evaluation; (b) schematic diagram of the spacing of the calibration balls

    Figure 4.  Method comparison: (a) traditional TWPSP body scanning schematic; (b) improved TWPSP body scanning schematic; (c) fitting result of the traditional method; (d) fitting result of the method in this paper; (e) traditional TWPSP method reconstruction; (f) improved TWPSP method reconstruction

    Figure 5.  Comparison of the average values of the three groups of poses

    Table 1.  First five measurements of calibration balls

    | Key dimension | No. | Traditional TWPSP method | Method in this paper |
    |---|---|---|---|
    | ${\hat O_1}{\hat O_2}$/mm | 1 | 49.853 | 49.970 |
    | | 2 | 50.053 | 50.038 |
    | | 3 | 49.979 | 50.000 |
    | | 4 | 49.956 | 50.009 |
    | | 5 | 50.014 | 50.024 |
    | | Average | 49.971 | 49.998 |
    | | RMSE | 0.044 | 0.019 |
    | ${\hat O_{\rm{2}}}{\hat O_{\rm{3}}}$/mm | 1 | 25.096 | 25.072 |
    | | 2 | 24.975 | 25.019 |
    | | 3 | 25.048 | 24.963 |
    | | 4 | 24.981 | 25.051 |
    | | 5 | 25.077 | 24.949 |
    | | Average | 25.061 | 25.028 |
    | | RMSE | 0.053 | 0.037 |
    | ${\hat O_1}{\hat O_3}$/mm | 1 | 55.654 | 55.924 |
    | | 2 | 55.643 | 55.934 |
    | | 3 | 55.572 | 55.910 |
    | | 4 | 55.632 | 55.881 |
    | | 5 | 55.610 | 55.875 |
    | | Average | 55.940 | 55.895 |
    | | RMSE | 0.055 | 0.035 |

    Table 2.  Data comparison table of fitting results

    | | ${\hat d_1}$/mm | ${\hat d_{\rm{2}}}$/mm | ${\hat d_{\rm{3}}}$/mm | $MA{E_{\rm{1}}}$/mm | $MA{E_{\rm{2}}}$/mm | $MA{E_{\rm{3}}}$/mm |
    |---|---|---|---|---|---|---|
    | Traditional TWPSP method | 19.949 | 19.961 | 19.978 | 0.048 | 0.035 | 0.039 |
    | Method in this paper | 19.979 | 19.978 | 19.988 | 0.028 | 0.018 | 0.019 |

    Table 3.  Point cloud data analysis of different sensors

    | Equipment | Point cloud number | Point cloud density | Accuracy/mm | Reconstruction time/s |
    |---|---|---|---|---|
    | Artec scanner | 795 785 | 0.143 | 0.1 | 7 |
    | Sense scanner | 121 717 | 0.001 | 0.9 | 5 |
    | Traditional TWPSP method system | 236 540 | 0.016 | 0.055 | 2 |
    | Scanning system of this paper | 296 540 | 0.016 | 0.03 | 1 |

    Table 4.  Angles of key points (all values in degrees)

    | | Average, No.1 car | Average, No.2 car | Average, No.3 car | RMSE, No.1 car | RMSE, No.2 car | RMSE, No.3 car |
    |---|---|---|---|---|---|---|
    | Angle A1 | 25.102 | 26.410 | 23.605 | 1.291 | 1.454 | 1.249 |
    | Angle A2 | 98.491 | 104.705 | 98.370 | 5.230 | 5.775 | 5.081 |
    | Angle A3 | 60.806 | 64.313 | 65.654 | 3.426 | 3.804 | 3.117 |
    | Angle A4 | 101.501 | 93.416 | 101.328 | 5.966 | 5.320 | 5.313 |
    | Angle A5 | 35.543 | 30.677 | 35.385 | 3.926 | 3.165 | 3.381 |
    | Angle A6 | 127.434 | 133.667 | 138.095 | 2.117 | 2.084 | 2.879 |
    | Angle A7 | 154.080 | 141.380 | 144.400 | 8.344 | 8.661 | 8.250 |
    | Angle A8 | 41.647 | 32.875 | 43.192 | 6.775 | 6.625 | 6.291 |
    | Angle A9 | 13.935 | 9.661 | 14.399 | 3.864 | 3.297 | 3.958 |
  • [1] Kim S H, Choi S G, Choi W K, et al. Pulse electrochemical machining on Invar alloy: Optical microscopic/SEM and non-contact 3D measurement study of surface analyses [J]. Applied Surface Science, 2014, 314: 822−831. doi:  10.1016/j.apsusc.2014.07.028
    [2] Lim H, Li F C, Friedman S, et al. Residual microstrain in root dentin after canal instrumentation measured with digital moiré interferometry [J]. Journal of Endodontics, 2016, 42(9): 1397−1402. doi:  10.1016/j.joen.2016.06.004
    [3] Mao T, Chen Q, He W J, et al. Time-of-Flight camera via a single-pixel correlation image sensor [J]. Journal of Optics, 2018, 20(4): 045609.
    [4] Łabęcki P M, Nowicki, Skrzypczyński P. Characterization of a compact laser scanner as a sensor for legged mobile robots [J]. Management & Production Engineering Review, 2012, 3(3): 45−52.
    [5] Sarbolandi H, Lefloch D, Kolb A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect [J]. Computer Vision & Image Understanding, 2015, 139: 1−20.
    [6] Rubinsztein D H, Forbes A, Berry M V, et al. Roadmap on structured light [J]. Journal of Optics, 2017, 19(1): 013001. doi:  10.1088/2040-8978/19/1/013001
    [7] Xia L, Chen C C, Aggarwal J K. Human detection using depth information by Kinect[C]//2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2011: 15-22.
    [8] Kuehnapfel A, Ahnert P, Loeffler M, et al. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry [J]. Scientific Reports, 2016, 6(1): 26672. doi:  10.1038/srep26672
    [9] Schwarz-Müller Frank, Marshall R, et al. Development of a positioning aid to reduce postural variability and errors in 3D whole body scan measurements [J]. Applied Ergonomics, 2018, 68: 90−100. doi:  10.1016/j.apergo.2017.11.001
    [10] Song L M, Li D P, Chang Y L, et al. Steering knuckle diameter measurement based on optical 3D scanning [J]. Optoelectronics Letters, 2014, 10(6): 473−476. doi:  10.1007/s11801-014-4144-1
    [11] Song L M, Lin W W, Yang Y G, et al. Fast 3D reconstruction of dental cast model based on structured light [J]. Optoelectronics Letters, 2018, 14(06): 457−460. doi:  10.1007/s11801-018-8076-z
    [12] Gutiérrezgarcía J C, Mosino J F, Martínez A, et al. Practical eight-frame algorithms for fringe projection profilometry [J]. Optics Express, 2013, 21(1): 903−917. doi:  10.1364/OE.21.000903
    [13] Zuo C, Feng S, Huang L, et al. Phase shifting algorithms for fringe projection profilometry: A review [J]. Optics and Lasers in Engineering, 2018, 109: 23−59. doi:  10.1016/j.optlaseng.2018.04.019
    [14] Choi S, Takahashi S, Sasaki O, et al. Three-dimensional step-height measurement using sinusoidal wavelength scanning interferometer with four-step phase-shift method [J]. Optical Engineering, 2014, 53(8): 084110.
    [15] Song L M, Dong X X, Xi J T, et al. A new phase unwrapping algorithm based on Three Wavelength Phase Shift Profilometry method [J]. Optics & Laser Technology, 2013, 45(1): 319−329.
    [16] Vo A V, Linh T H, Laefer D F, et al. Octree-based region growing for point cloud segmentation [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2015, 104: 88−100. doi:  10.1016/j.isprsjprs.2015.01.011
    [17] Bauchau O A, Traineli L. The vectorial parameterization of rotation [J]. Nonlinear Dynamics, 2003, 32(1): 71−92. doi:  10.1023/A:1024265401576
    [18] Wu J L, Tian X L. Image fusion for Mars data using mix of robust PCA [J]. International Journal of Pattern Recognition & Artificial Intelligence, 2017, 31(1): 1754002.
Publication history
  • Received: 2020-03-01
  • Revised: 2020-04-19
  • Published online: 2020-07-01
  • Issue date: 2020-07-01


    • Traditional human body measurement methods mainly use soft rulers, altimeters, sliding calipers and other contact measuring tools. Although these contact methods can obtain detailed human body data, many practical issues during measurement, such as physical contact between the measurer and the subject, fatigue, clothing, and the measurer's personal skill, can affect measurement accuracy or the measurement experience.

      As a branch of modern image measurement technology, 3D anthropometry is a 3D measurement technology based on modern optics that fuses electronics, computer graphics, information processing, and computer vision. Compared with traditional measurement technology, 3D anthropometry can measure the dimensions of multiple parts of the human body in a few seconds with high precision. Non-contact optical 3D measurement greatly shortens the measurement time, improves the measurement accuracy, provides more accurate data for computer-aided design in the subsequent processing of 3D body data, and facilitates data management and recording[1]. The widely used active 3D measurement methods mainly include the moiré interferometry method, the time-of-flight (ToF) method, the laser triangulation method and the structured light method. Moiré interferometry[2] uses two equally spaced fringe gratings with a small angular displacement to generate moiré fringes and thus obtain the phase information of the object, from which the depth can be calculated through a demodulation function. Although this method can acquire depth information in real time, it is not suitable for 3D measurement of the human body. The ToF method[3] calculates the depth of an object by transmitting ultrasonic waves or light pulses of a certain frequency and measuring the time difference between the transmitted and received signals. Its advantage is that it is not affected by the gray level or other characteristics of the object; however, ToF cameras have low resolution and high cost. The laser triangulation method[4] acquires depth by calculating the angular offset between the emitted and received laser beams; it effectively avoids the influence of external light and achieves high accuracy, but its measurement speed is slow. The 3D non-contact measurement technology based on structured light has the advantages of fast measurement speed, large measurement range and high robustness[5-6]. Considering measurement accuracy, measurement range and equipment cost, we use the structured light method as the 3D reconstruction method to obtain high-precision 3D data.

      Xia et al.[7] proposed a novel human detection method using depth information captured by the Kinect for Xbox 360; the method could measure a 3D face model in real time, but its accuracy was unsatisfactory. Kuehnapfel et al.[8] compared 3D laser-based body scans (BS) with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. Schwarz-Müller et al.[9] developed a 'positioning aid' that stabilized the posture during the scanning process and was invisible on scans, which increased the precision of the software-assisted extraction of body dimensions.

      Car seat comfort is one of the most important indicators for car manufacturers and consumers, and has a considerable impact on human health. With traditional skeleton extraction from 2D images, it is difficult to obtain the 3D spatial information of the human posture, while 3D measurement based on optical motion capture has low accuracy and is difficult to apply to sitting posture analysis in automobiles. Aiming at these problems, this paper proposes an improved three wavelength phase shift profilometry (TWPSP) method for measuring 3D human body data[10-12]. The method combines structured light with marker points, and can quickly measure the position and angle data of a human body seated in an automobile through positioning marks attached to the body. In the improved method, three kinds of fringe patterns with properly chosen frequencies are projected onto the target object, and the wrapped phase can be calculated directly from these frequencies without computing equivalent frequencies. This not only enhances the stability of the calculation process, but also avoids error propagation and improves measurement accuracy. In addition, the method is robust, achieves good results in complex environments, and can overcome the weak reflectivity of black objects encountered in laser scanning.

      The remainder of this paper is organized as follows. The improved three-frequency phase shift profilometry method is explained in Sec. 1. Sec. 2 introduces the preprocessing of the human point cloud data, the pasting positions of the marker points, and the definition of the key angles. Sec. 3 provides experimental results and analyses, and Sec. 4 concludes the paper.

    • The advantages of the phase shift method are fast three-dimensional measurement, wide applicability, and high robustness. Its disadvantage is that it cannot obtain good results under dark ambient light or on reflective metallic objects; since neither condition applies to the human body pose data collected in this paper, the phase shift method is suitable. The main principle of 3D reconstruction based on phase-shifted structured light is to project fringe images of different frequencies onto the surface of the target object. Unlike an ordinary Gray-code pattern, this structured light is sinusoidal, which compensates for the limited phase resolution of Gray-code patterns. The traditional three-step phase shift method[13] requires a large number of arctangent operations in the phase unwrapping process, which makes unwrapping expensive, while the four-step phase shift method[14] cannot effectively suppress the influence of uniform harmonics and is not sensitive to background brightness. In order to obtain high-precision 3D data for subsequent processing, this paper adopts the six-step phase shift method, which suppresses the influence of the nonlinear response of the camera, accelerates the reconstruction, and improves the reconstruction accuracy.

      Assume that the modulated images acquired by the camera are denoted by ${I_k}$, the sinusoidal fringe light has $i$ frequencies, and ${\phi _i}(x,y)\;(i = 1,2,3)$ is the wrapped phase. The six-step phase-shift method adopted in this paper can be described as

      $$\begin{split} & {I_k}({{x}},y) = A({{x}},y) + B(x,y)\cos \left( {{\phi _i}(x,y) + \frac{{{\text{π}} \times k}}{3}} \right), \\ & k = 0,1...5 \end{split} $$ (1)

      where $A(x,y)$ is the background light intensity, $B(x,y)$ is the modulation light intensity, and ${\phi _i}(x,y)$ can be calculated as

      $$\begin{split} & {\phi _i}(x,y) = \arctan \left( {\frac{{{I_5}(x,y) - {I_3}(x,y)}}{{({I_1}(x,y) + {I_4}(x,y)) - ({I_3}(x,y) + {I_5}(x,y))}}} \right),\\ &i = 1,2,3\\[-10pt]\end{split}$$ (2)

      where ${\phi _i}(x,y)$ lies in $\left[ { - {\text{π}} , + {\text{π}} } \right]$ and has $2{\text{π}} $ discontinuities.
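As a sketch, the wrapped-phase recovery can be written generically for $N$ phase steps (NumPy; the function name is mine). For $N = 6$ the phase step $2{\text{π}}k/N$ equals the ${\text{π}}k/3$ of Eq.(1); the paper's Eq.(2) gives the specific six-step arctangent form.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N phase-shifted fringe images.

    images[k] = A + B*cos(phi + 2*pi*k/N), k = 0..N-1; for the six-step
    method of Eq. (1), N = 6. Returns phi wrapped to (-pi, pi].
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    k = np.arange(n).reshape(-1, *([1] * (images.ndim - 1)))
    num = -(images * np.sin(2 * np.pi * k / n)).sum(axis=0)  # = (N*B/2) sin(phi)
    den = (images * np.cos(2 * np.pi * k / n)).sum(axis=0)   # = (N*B/2) cos(phi)
    return np.arctan2(num, den)
```

Using `arctan2` rather than a plain arctangent resolves the quadrant automatically, so the background term $A(x,y)$ and the modulation $B(x,y)$ cancel out of the result.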

      The existing three-frequency phase shift profilometry method[15] uses period lengths of ${\lambda _1} = 21$ pixels, ${\lambda _2} = 18$ pixels and ${\lambda _3} = 16$ pixels as the modulation periods of the projected fringes. From these three basic periods, longer equivalent periods ${\lambda _{12}} = 126$ pixels, ${\lambda _{23}} = 144$ pixels and ${\lambda _{123}} = 1\;008$ pixels can be obtained. The equivalent periods are calculated as

      $${\lambda _{12}} = \left| {\frac{{{\lambda _1} \times {\lambda _2}}}{{{\lambda _1} - {\lambda _2}}}} \right|,\;{\lambda _{23}} = \left| {\frac{{{\lambda _2} \times {\lambda _3}}}{{{\lambda _2} - {\lambda _3}}}} \right|,\;{\lambda _{123}} = \left| {\frac{{{\lambda _{12}} \times {\lambda _{23}}}}{{{\lambda _{12}} - {\lambda _{23}}}}} \right|$$ (3)
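With the periods above, Eq.(3) can be checked numerically (helper name is mine):

```python
def equivalent_period(lam_a: float, lam_b: float) -> float:
    """Beat (equivalent) period of two fringe periods, Eq. (3)."""
    return abs(lam_a * lam_b / (lam_a - lam_b))

lam12 = equivalent_period(21, 18)          # 126 pixels
lam23 = equivalent_period(18, 16)          # 144 pixels
lam123 = equivalent_period(lam12, lam23)   # 1008 pixels
```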

      It should be noted that the final equivalent period must be greater than the transverse resolution of the projector so that discontinuities are avoided. The wrapped phases ${\phi _{12}}(i,j)$, ${\phi _{23}}(i,j)$ and ${\phi _{{\rm{1}}23}}(i,j)$ corresponding to the equivalent periods can then be calculated, and the unwrapped phase ${\varPhi _3}(i,j)$ corresponding to ${\lambda _3} = 16$ follows as

      $$\begin{split}{\varPhi _3}(x,y) =\; & {\phi _3}(x,y) + 2{\text{π}} \Bigg({\rm{INT}}\left( {\frac{{{\phi _{123}}(x,y)}}{{2{\text{π}} }} \times \frac{{{\lambda _{123}}}}{{{\lambda _{23}}}}} \right) \times \Bigg.\\ & \Bigg.\frac{{{\lambda _{23}}}}{{{\lambda _3}}}{ + {\rm{INT}}\left( {\frac{{{\phi _{23}}(x,y)}}{{2{\text{π}} }} \times \frac{{{\lambda _{23}}}}{{{\lambda _3}}}} \right)} \Bigg)\end{split}$$ (4)

      It can be seen from the above derivation that a cumulative error arises in the process of calculating wrapped phases of different periods, which ultimately leads to missing points in the generated 3D data. To avoid this, we select ${\lambda '_1} = 1\;008$ pixels, ${\lambda '_2} = 144$ pixels and ${\lambda '_3} = 16$ pixels as the modulation periods of the projected stripes; the corresponding wrapped phases ${\phi '_i}(x,y)$ can be calculated by Eq.(2), where ${\phi '_1}(x,y) = {\phi _{123}}(x,y)$, ${\phi '_2}(x,y) = {\phi _{23}}(x,y)$ and ${\phi '_3}(x,y) = $ ${\phi _3}(x,y) $. In this way, the unwrapped phase ${\varPhi '_3}(x,y)$ can be obtained without calculating the equivalent periods. In general, the phase can be calculated as

      $${\theta _G}(x,y) = {\theta _w}(x,y) + 2{\text{π}} \cdot m(x,y)$$ (5)

      where ${\theta _G}(x,y)$ is absolute unwrapped phase, ${\theta _w}(x,y)$ is wrapped phase, $m(x,y)$ is an integer.

      For ${\lambda '_1} = 1\;008$ pixel periods, there exists ${\lambda '_1}/{\lambda '_2}$ discontinuities. As for ${\lambda '_2} = 144$ pixel periods, there exists ${\lambda '_2}/{\lambda '_3}$ discontinuities. In order to eliminate the discontinuities, the unwrapped phase ${\varPhi '_3}(x,y)$ is revised to

      $${\varPhi '_3}(x,y) = \left\{ \begin{aligned} & {{\phi '}_3}(x,y) + 2{\text{π}} \left( {{\rm{INT}}\left( {\frac{{{{\phi '}_1}(x,y)}}{{2{\text{π}} }} \times \frac{{{{\lambda '}_1}}}{{{{\lambda '}_2}}}} \right) \times \frac{{{{\lambda '}_2}}}{{{{\lambda '}_3}}}\left. { + {\rm{INT}}\left( {\frac{{{{\phi '}_2}(x,y)}}{{2{\text{π}} }} \times \frac{{{{\lambda '}_2}}}{{{{\lambda '}_3}}}} \right)} \right)} \right. \\ & {{\phi '}_1}(x,y) \ne 2{{\text{π}} _{}}\;and\;{{\phi '}_2}(x,y) \ne 2{{\text{π}} _{}}\;and\;{{\phi '}_3}(x,y) \ne 2{\text{π}} \\ & {{\phi '}_3}(x,y) + 2{\text{π}} \left( {{\rm{INT}}\left( {\frac{{{{\phi '}_1}(x,y)}}{{2{\text{π}} }} \times \frac{{{{\lambda '}_1}}}{{{{\lambda '}_2}}}} \right) \times \frac{{{{\lambda '}_2}}}{{{{\lambda '}_3}}}\left. { + {\rm{INT}}\left( {\frac{{{{\phi '}_2}(x,y)}}{{2{\text{π}} }} \times \frac{{{{\lambda '}_2}}}{{{\lambda _3}}}} \right){\rm{ - }}1} \right)} \right. \\ & {{\phi '}_1}(x,y) \ne 2{{\text{π}} _{}}\;and\;\left( {{{\phi '}_2}(x,y) = 2{{\text{π}} _{}}\left. \;or\;{{{\phi '}_3}(x,y) = 2{\text{π}} } \right)} \right. \\ & {{\phi '}_3}(x,y) + 2{\text{π}} \left( {{\rm{INT}}\left( {\frac{{{{\phi '}_1}(x,y)}}{{2{\text{π}} }} \times \frac{{{{\lambda '}_1}}}{{{{\lambda '}_2}}} - 1} \right) \times \frac{{{{\lambda '}_2}}}{{{{\lambda '}_3}}}\left. { + {\rm{INT}}\left( {\frac{{{{\phi '}_2}(x,y)}}{{2{\text{π}} }} \times \frac{{{{\lambda '}_2}}}{{{{\lambda '}_3}}}} \right)} \right)} \right. \\ & {{\phi '}_1}(x,y) = 2{{\text{π}} _{}}\;and\;{{\phi '}_2}(x,y) \ne 2{{\text{π}} _{}}\;and\;{{\phi '}_3}(x,y) \ne 2{\text{π}} \end{aligned} \right.$$ (6)

      The corresponding phase order is calculated from the relationship among ${\phi '_1}(x,y)$, ${\phi '_2}(x,y)$ and ${\phi '_3}(x,y)$. When ${\phi '_2}(x,y)$ or ${\phi '_3}(x,y)$ produces a phase jump while ${\phi '_1}(x,y)$ does not, we subtract $2{\text{π}} $ from the unwrapped phase to compensate for the error. When ${\phi '_2}(x,y)$ and ${\phi '_3}(x,y)$ have no phase jump but ${\phi '_1}(x,y)$ does, we subtract $2{\text{π}} {\lambda '_2}/{\lambda '_3}$ from the unwrapped phase. When no phase jump occurs, no compensation is required, as shown in Eq.(6).

      Because the improved TWPSP method does not calculate a synthesized phase, it avoids the errors generated during phase combination and their propagation, reduces the error in the phase unwrapping process, shortens the time needed to compute the unwrapped phase, and thus improves the quality of the 3D point cloud data while speeding up the entire 3D reconstruction process. It will be demonstrated that the method meets the measurement needs of this paper.
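The hierarchical idea behind the unwrapping, a coarse phase fixing the fringe order of a finer one, can be sketched as follows. This uses a standard round-to-nearest-order formulation rather than the paper's exact INT-based case analysis of Eq.(6), and the function name is mine.

```python
import numpy as np

def unwrap_with_reference(phi_fine, phi_coarse_unwrapped, ratio):
    """Temporal phase unwrapping: the already-unwrapped coarse phase,
    scaled by ratio = lambda_coarse / lambda_fine, predicts the fine
    phase; the fringe order is the nearest integer closing the gap."""
    order = np.round((ratio * phi_coarse_unwrapped - phi_fine) / (2 * np.pi))
    return phi_fine + 2 * np.pi * order

# In this sketch the 1008-pixel phase is taken as the unwrapped reference;
# the 144-pixel phase is unwrapped with ratio 1008/144 = 7, and the
# 16-pixel phase with ratio 144/16 = 9.
```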

    • Since the human body point cloud data acquired by the 3D scanning is contaminated by environmental noise, preprocessing is required after the measurement, in order to reduce the error and improve the measurement accuracy. The steps of preprocessing involved in this paper include denoising, coordinate system correction, and data analysis.

      Before measurement, we paste markers on key positions of the human body so that the key coordinate points can be quickly extracted during data processing. The positions of the markers are shown in Fig. 1(a).

      Figure 1.  Target data (a) distribution of the markers (b) angle calculation diagram

      Then we use the scanner to obtain the human body point cloud data. The scanned point cloud usually contains several kinds of defects or external noise, so it must be preprocessed. In the preprocessing stage, the octree[16] is a commonly used data structure for obtaining the neighborhood information of a point; it improves search efficiency and saves processing time. This paper uses an octree to traverse the scanned point cloud data and delete invalid points.

      For each point, we calculate its average distance to a specified number of adjacent points. If this average distance falls outside the standard range, the point is classified as an outlier and removed from the data. The average distance can be calculated as

      $${{D}}= \left(\sum\limits_{i = 1}^k {\sqrt {{{(x - {x_i})}^2} + {{(y - {y_i})}^2} + {{(z - {z_i})}^2}} } \right)\frac{1}{k}$$ (7)

      where $k$ is the number of adjacent points, and $x$, $y$ and $z$ denote the coordinate of the current point of target point cloud.
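A brute-force sketch of this statistical outlier removal follows (the paper accelerates the neighbour search with an octree; the standard-deviation threshold convention is my assumption):

```python
import numpy as np

def remove_outliers(points, k=8, n_std=1.0):
    """Drop points whose mean distance to their k nearest neighbours
    (the D of Eq. (7)) exceeds the global mean of that quantity by
    n_std standard deviations. O(n^2) pairwise distances; an octree
    or k-d tree makes the neighbour search fast in practice."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    knn = np.sort(dist, axis=1)[:, 1:k + 1]   # skip distance to self
    d_mean = knn.mean(axis=1)                 # D in Eq. (7), per point
    keep = d_mean <= d_mean.mean() + n_std * d_mean.std()
    return pts[keep]
```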

      After point cloud filtering, it is also necessary to transform all point cloud data collected in different scenes into a unified coordinate system. Rodrigues' rotation formula calculates the rotation matrix between two vectors from the rotation angle and the rotation axis[17]. This paper uses principal component analysis[18] to calculate the main direction vector $v$ of the target point cloud. The angle between the vector $v$ and the main direction vector $u$ of the standard coordinate system can be calculated as

      $$\theta = \arccos \left(\frac{{{v_1}{u_1} + {v_2}{u_2} + {v_3}{u_3}}}{{\sqrt {{v_1}{v_1} + {v_2}{v_2} + {v_3}{v_3}} \sqrt {{u_1}{u_1} + {u_2}{u_2} + {u_3}{u_3}} }}\right)$$ (8)

      where $\theta $ is the rotation angle, $v$ is the main direction vector of the original point cloud, and $u$ is the main direction vector of the standard coordinate system.

      The rotation axis vector can be calculated as

      $$ \left( \begin{aligned} {r_1} \\ {r_2} \\ {r_3} \\ \end{aligned} \right) = \left( \begin{aligned} {v_2}{u_3} - {v_3}{u_2} \\ {v_3}{u_1} - {v_1}{u_3} \\ {v_1}{u_2} - {v_2}{u_1} \\ \end{aligned} \right) $$ (9)

      According to the rotation axis and the rotation angle, the rotation matrix ${R_{}}$ can be calculated as

      $$R = \left[ \begin{array}{ccccc} {r_1^2 + (1 - r_1^2)\cos \theta }&{{r_1}{r_2}(1 - \cos \theta ) + {r_3}\sin \theta }&{{r_1}{r_3}(1 - \cos \theta ) + {r_2}\sin \theta } \\ {{r_1}{r_2}(1 - \cos \theta ) - {r_3}\sin \theta }&{r_2^2 + (1 - r_2^2)\cos \theta }&{{r_2}{r_3}(1 - \cos \theta ) - {r_1}\sin \theta } \\ {{r_1}{r_3}(1 - \cos \theta ) - {r_2}\sin \theta }&{{r_2}{r_3}(1 - \cos \theta ) + {r_1}\sin \theta }&{r_3^2 + (1 - r_3^2)\cos \theta } \end{array} \right]$$ (10)

      where $r$ is the rotation axis vector.
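Equations (8)-(10) together take two direction vectors to a rotation matrix. A compact sketch, using the cross product for the axis and Rodrigues' formula for the assembly (names are mine; the antiparallel case would need an explicitly chosen perpendicular axis):

```python
import numpy as np

def rotation_between(v, u):
    """Rotation R with R @ v_hat = u_hat: axis from Eq. (9), angle from
    Eq. (8), assembled via Rodrigues' formula, of which Eq. (10) is the
    transpose convention."""
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    u = np.asarray(u, dtype=float) / np.linalg.norm(u)
    axis = np.cross(v, u)                        # Eq. (9)
    s, c = np.linalg.norm(axis), float(v @ u)    # sin/cos of the Eq. (8) angle
    if s < 1e-12:
        return np.eye(3)   # parallel vectors; antiparallel needs special care
    axis /= s
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])     # cross-product matrix of axis
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```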

      In order to translate the target point cloud to the origin of the standard coordinate system, it is necessary to calculate the center of gravity of the target point cloud, which is given by

      $$\left( \begin{aligned} {c_x} \\ {c_y} \\ {c_z} \end{aligned} \right) = \left( \begin{aligned} (\sum\limits_{i = 1}^n {{x_i})/n} \\ (\sum\limits_{i = 1}^n {{y_i})/n} \\ (\sum\limits_{i = 1}^n {{z_i})/n} \end{aligned} \right)$$ (11)

      where $n$ is the number of points in the target point cloud, and ${x_i}$, ${y_i}$ and ${z_i}$ are the coordinates of its $i$-th point.

      The translation matrix $T({T_x},{T_y},{T_z})$ can be calculated as

      $$\left[ \begin{aligned} {T_{\rm{x}}} \\ {T_{\rm{y}}} \\ {T_{\rm{z}}} \end{aligned} \right] = \left[ \begin{aligned} - {c_x} \\ - {c_y} \\ - {c_z} \end{aligned} \right]$$ (12)

      After calculating the rotation matrix and the translation matrix, we can rotate and translate the target point cloud data to the standard coordinate system. The corrected coordinates of point cloud $P'$ can be transformed by

      $$\left[ {\begin{aligned} {P'} \\ 1 \end{aligned}} \right] = \left[ {\begin{array}{*{20}{c}} {{R_{}}}&T \\ {{0^T}}&1 \end{array}} \right]\left[ {\begin{aligned} P \\ 1 \end{aligned}} \right]$$ (13)

      where $P$ denotes the original coordinates of a point in the target point cloud.
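Putting Eqs.(11)-(13) together: compute the centroid, build one 4×4 homogeneous matrix, and apply it to every point. In this sketch (names are mine) the translation is composed so the centroid lands exactly at the origin:

```python
import numpy as np

def align_to_standard(points, R):
    """Centre the cloud (Eqs. (11)-(12)) and rotate it with R inside a
    single homogeneous transform, as in Eq. (13)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                      # centre of gravity, Eq. (11)
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = -R @ c                         # so that P' = R @ (P - c)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # [P; 1] rows
    return (homog @ M.T)[:, :3]
```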

      After completing the above steps, the next step is to calculate the angles among the key points and complete the data analysis. The angle calculation diagram of the 3D human body inside the car is shown in Fig. 1(b).

      In order to calculate the key angles, the distance between two adjacent key points is first obtained as

      $$d(A,B) = \sqrt {{{({x_A} - {x_B})}^2} + {{({y_A} - {y_B})}^2} + {{({z_A} - {z_B})}^2}} $$ (14)

      where $A({x_A},{y_A},{z_A})$, $B({x_B},{y_B},{z_B})$ represent two key points of target point cloud.

      After obtaining the distances between the key points, the next step is to calculate the space vectors between adjacent key points:

      $$\begin{split} & \overline {AB} = ({B_x} - {A_x},{B_y} - {A_y},{B_z} - {A_z}) \\ & \overline {AC} = ({C_x} - {A_x},{C_y} - {A_y},{C_z} - {A_z}) \end{split} $$ (15)

      where $A({x_A},{y_A},{z_A})$, $B({x_B},{y_B},{z_B})$ and $C({x_C},{y_C},{z_C})$ represent key points of the target point cloud. The key angle then follows from the included angle of these two vectors:

      $$\cos {\theta_s} = \frac{{AB\cdot AC}}{{|AB||AC|}}$$ (16)

      where ${\theta _s}$ is the key angle value of the human sitting posture.
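Eqs.(14)-(16) in code (a minimal sketch; the clip guards `arccos` against floating-point rounding):

```python
import numpy as np

def key_distance(a, b):
    """Euclidean distance between two key points, Eq. (14)."""
    return float(np.linalg.norm(np.asarray(b, dtype=float) -
                                np.asarray(a, dtype=float)))

def key_angle(a, b, c):
    """Sitting-posture angle at vertex A between AB and AC,
    Eqs. (15)-(16), in degrees."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    ab, ac = b - a, c - a
    cos_t = ab @ ac / (np.linalg.norm(ab) * np.linalg.norm(ac))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
```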

    • A 3D measurement and reconstruction system for the human body inside a car, including both software and hardware, was developed in our lab. The binocular 3D human body scanning system is shown in Fig. 2. A projector with a resolution of 1 024×768 casts three kinds of sine stripes, whose periods are 1 008 pixels, 144 pixels and 16 pixels, respectively. The industrial camera, fitted with an 8 mm lens, sends 1 280×1 024 gray images to the computer at a frame rate of 4 frames/s. As shown in Fig. 2, a person sits in a car and adjusts the seat to the most comfortable driving posture; the human body is then scanned in this posture to obtain its three-dimensional posture information. To ensure the integrity of the data, the scanning range is 1.5 m × 2 m, and the binocular 3D measurement system is more than 2 m away from the human body.

      Figure 2.  Binocular 3D human body scanning system

      To examine the performance of the improved TWPSP method, the traditional TWPSP method[15] and the method proposed in this paper were used, at the same distance and scanning range, to measure three calibration balls with a diameter of 20 mm. The calibration balls are made of matte material, which suppresses specular reflection, and their manufacturing accuracy reaches 0.000 1 mm.

      In Fig. 3(a), the calibration balls are placed in different areas for 3D scanning to obtain the corresponding 3D reconstruction data in each area. In Fig. 3(b), the distances between the centers of the three balls are 25 mm, 50 mm and 55.90 mm. We use the measurement scheme shown in Fig. 3(a) to evaluate the measurement accuracy of the 3D scanning system, fitting the calibration-ball 3D point cloud data obtained by the two methods.

      Figure 3.  Calibration balls (a) calibration balls used in system measurement accuracy (b) schematic diagram of the spacing of the calibration balls

      From the measured 3D data, calibration Ball 1, Ball 2 and Ball 3 are fitted in Geomagic Studio, which yields the diameters ${\hat d_1}$, ${\hat d_{\rm{2}}}$ and ${\hat d_{\rm{3}}}$ and the sphere centers ${\hat O_1}$, ${\hat O_{\rm{2}}}$ and ${\hat O_{\rm{3}}}$ of the three calibration balls.
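      The paper performs the sphere fits in Geomagic Studio; for readers without that software, an equivalent fit can be sketched with a linear least-squares algebraic sphere fit (our own substitute for illustration, not the paper's procedure):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    Rewrites |x - c|^2 = r^2 as the linear system |x|^2 = 2 c.x + (r^2 - |c|^2)
    and solves it for the center c and radius r.
    points: (N, 3) array of 3D points on (or near) the sphere surface.
    """
    P = np.asarray(points, float)
    A = np.c_[2.0 * P, np.ones(len(P))]   # unknowns: cx, cy, cz, d = r^2 - |c|^2
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```

      The fitted centers then give the center-to-center distances compared against the nominal 25 mm, 50 mm and 55.90 mm spacings.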

      Next, the distances between the sphere centers of the three calibration balls, ${\hat O_1}{\hat O_2}$, ${\hat O_{\rm{2}}}{\hat O_{\rm{3}}}$ and ${\hat O_1}{\hat O_{\rm{3}}}$, are obtained from the coordinates of ${\hat O_1}$, ${\hat O_2}$ and ${\hat O_{\rm{3}}}$. We made ten measurements and calculated the average distances and RMSE of ${\hat O_1}{\hat O_2}$, ${\hat O_{\rm{2}}}{\hat O_{\rm{3}}}$ and ${\hat O_1}{\hat O_{\rm{3}}}$; the first five measurements are shown in Tab. 1.

      Table 1.  First five measurements of calibration balls

      | Key dimensions | No. | Traditional TWPSP method | Method in this paper |
      |---|---|---|---|
      | ${\hat O_1}{\hat O_2}$/mm | 1 | 49.853 | 49.970 |
      | | 2 | 50.053 | 50.038 |
      | | 3 | 49.979 | 50.000 |
      | | 4 | 49.956 | 50.009 |
      | | 5 | 50.014 | 50.024 |
      | | Average | 49.971 | 49.998 |
      | | RMSE | 0.044 | 0.019 |
      | ${\hat O_{\rm{2}}}{\hat O_{\rm{3}}}$/mm | 1 | 25.096 | 25.072 |
      | | 2 | 24.975 | 25.019 |
      | | 3 | 25.048 | 24.963 |
      | | 4 | 24.981 | 25.051 |
      | | 5 | 25.077 | 24.949 |
      | | Average | 25.061 | 25.028 |
      | | RMSE | 0.053 | 0.037 |
      | ${\hat O_1}{\hat O_3}$/mm | 1 | 55.654 | 55.924 |
      | | 2 | 55.643 | 55.934 |
      | | 3 | 55.572 | 55.910 |
      | | 4 | 55.632 | 55.881 |
      | | 5 | 55.610 | 55.875 |
      | | Average | 55.940 | 55.895 |
      | | RMSE | 0.055 | 0.035 |

      As can be seen from Tab. 1, the RMSEs of the center-to-center distances of the three calibration balls measured by the traditional method are 0.044 mm, 0.053 mm and 0.055 mm, while those measured by the improved method proposed in this paper are 0.019 mm, 0.037 mm and 0.035 mm. Comparing these results shows that the measurements obtained by the improved method are more stable.

      To further validate the effectiveness of the improved method proposed in this paper, the three standard balls were used in ball calibration experiments; the results are shown in Tab. 2. The mean absolute error (MAE) of each ball is computed separately, where ${\hat d_1}$, ${\hat d_{\rm{2}}}$ and ${\hat d_{\rm{3}}}$ are the averages of ten measurements. The mean of the MAEs of the three calibration balls is taken as the average measurement accuracy of the system, which is 0.021 mm for the improved method. Compared with the accuracy obtained by the traditional method, the improved method achieves higher measurement accuracy, so it can be used in actual measurement to obtain higher-precision data.

      Table 2.  Data comparison table of fitting results

      | | ${\hat d_1}$/mm | ${\hat d_{\rm{2}}}$/mm | ${\hat d_{\rm{3}}}$/mm | $MA{E_{\rm{1}}}$/mm | $MA{E_{\rm{2}}}$/mm | $MA{E_{\rm{3}}}$/mm |
      |---|---|---|---|---|---|---|
      | Traditional TWPSP method | 19.949 | 19.961 | 19.978 | 0.048 | 0.035 | 0.039 |
      | Method in this paper | 19.979 | 19.978 | 19.988 | 0.028 | 0.018 | 0.019 |

      The parts within the red rectangles in Fig. 4(c) and Fig. 4(d) show the difference between the two methods: the 3D point cloud obtained by the traditional method contains mismatched points, whereas the point cloud obtained by the improved method is denser and more accurate.

      Figure 4.  Method comparison (a) traditional TWPSP body scanning schematic (b) improved TWPSP body scanning schematic (c) fitting result of traditional method (d) result of the method in this paper (e) traditional TWPSP method reconstruction chart (f) improved TWPSP method reconstruction chart

      Next, the reconstruction effects of the two methods are further compared in a real anthropometric scenario. The measurement distance is more than 2 m, and the measurement range is 1.5 m × 2 m. The scanning schematics of the two methods are shown in Fig. 4(a) and Fig. 4(b).

      Experiments show that the point cloud obtained by the improved phase unwrapping method is more compact and its details are more complete, with no missing regions. The reconstruction results of the two methods are compared in Fig. 4(e) and Fig. 4(f).

      To further evaluate the performance of the scanning system using the method presented herein, two commonly used body scanners were used to collect data on the same person and vehicle. The Artec Eva 3D scanner requires no markers or object calibration; it captures the shape of the object at high resolution and reproduces its colors. The Sense scanner is a laser scanner with a short scan time that can quickly generate 3D information of the target. Human point cloud data were collected with these scanners and compared in terms of the number of generated points, accuracy, and reconstruction time; the results are shown in Tab. 3.

      Table 3.  Point cloud data analysis of different sensors

      | Equipment | Point cloud number | Point cloud density | Accuracy/mm | Reconstruction time/s |
      |---|---|---|---|---|
      | Artec scanner | 795 785 | 0.143 | 0.1 | 7 |
      | Sense scanner | 121 717 | 0.001 | 0.9 | 5 |
      | Traditional TWPSP method system | 236 540 | 0.016 | 0.055 | 2 |
      | Scanning system of this paper | 296 540 | 0.016 | 0.03 | 1 |

      As seen from Tab. 3, the number of points generated by the Artec scanner is the highest, but its accuracy is only 0.1 mm and its reconstruction takes 7 s. Although the reconstruction time of the Sense scanner is 2 s shorter than the Artec's, it generates a sparse point cloud with an accuracy of only 0.9 mm, which is insufficient for the measurement needs. The point cloud collected by the method proposed in this paper has an accuracy of 0.03 mm and requires only 1 s of reconstruction time. This comparison shows that the improved TWPSP method has a fast reconstruction speed and a high point cloud density, making it suitable for fast 3D anthropometry, and that the improved optical 3D body scanner achieves better accuracy and point cloud quality.

      We used the method proposed in this paper to collect 20 sets of human body 3D data on three different car models. For each set of data on each car, the averages of nine key angles were calculated, together with the average and RMSE of each angle for each car model. The calculation results are shown in Tab. 4. It can be seen from Fig. 5 that the angle pose data of the 20 experimental subjects on the different cars are close to each other, with a small fluctuation range. The largest RMSE of the proposed in-car human body 3D measurement method across the different cars is 8.661°.

      Table 4.  Angles of key points (all values are reported as degree)

      | | Average degree of No.1 car | Average degree of No.2 car | Average degree of No.3 car | RMSE of No.1 car | RMSE of No.2 car | RMSE of No.3 car |
      |---|---|---|---|---|---|---|
      | Angle A1 | 25.102 | 26.410 | 23.605 | 1.291 | 1.454 | 1.249 |
      | Angle A2 | 98.491 | 104.705 | 98.370 | 5.230 | 5.775 | 5.081 |
      | Angle A3 | 60.806 | 64.313 | 65.654 | 3.426 | 3.804 | 3.117 |
      | Angle A4 | 101.501 | 93.416 | 101.328 | 5.966 | 5.320 | 5.313 |
      | Angle A5 | 35.543 | 30.677 | 35.385 | 3.926 | 3.165 | 3.381 |
      | Angle A6 | 127.434 | 133.667 | 138.095 | 2.117 | 2.084 | 2.879 |
      | Angle A7 | 154.080 | 141.380 | 144.400 | 8.344 | 8.661 | 8.250 |
      | Angle A8 | 41.647 | 32.875 | 43.192 | 6.775 | 6.625 | 6.291 |
      | Angle A9 | 13.935 | 9.661 | 14.399 | 3.864 | 3.297 | 3.958 |

      Figure 5.  Comparison of the average values of the three groups of poses

      As shown in Tab. 4, the maximum RMSE in the measurement results is 8.661° and the minimum is 1.249°. The experimental results show that the improved TWPSP method can realize fast 3D measurement of the human body and obtain more complete point cloud data. In addition, the key-angle data calculated from the points measured by this method is highly robust.

    • This paper proposed a fast 3D measurement method for human posture. Compared with the traditional three-frequency 3D measurement method, the improved method achieves higher measurement accuracy and reconstruction speed. When the distance is more than 2 m and the measuring range is 1.5 m × 2 m, the reconstruction accuracy reaches 0.03 mm and the reconstruction takes only 1 s. In addition, compared with existing handheld 3D scanning equipment, the 3D scanning system built on the improved method offers higher point cloud quality, faster reconstruction and moderate point cloud density. At the same time, the key-angle data obtained by this method fluctuates little and is highly repeatable across measurements, meeting the needs of 3D measurement of human posture. Our future work includes automatic and rapid extraction of human body landmarks and combining the 3D measurements with artificial neural networks.
