-
散斑结构光三维传感技术的基本原理非常简单,即基于人类的双眼立体视觉原理。在自然界中,同样的景物在不同动物的眼里是有所差异的。大多数食草类哺乳动物,如牛、马、羊等,它们的双眼长在头的两侧,因此具有更广阔的视野(为了更全面地搜索附近是否有危险物种)[113]。但它们的双眼视野完全不重叠,左眼和右眼各自感受不同侧面的光刺激,因此这些动物仅有单目视觉而不具有立体视觉。而人和灵长类动物不同,他们的双眼都在头部的前方,双眼的鼻侧视野相互重叠,因此落在此范围内的任何物体都能同时被双眼所见形成立体视觉(为了更精准地捕获猎物)。当两只眼睛分别形成的物体被转化成神经信号传输到大脑以后,大脑就会对它们进行综合加工处理,主观上可产生被视物体的厚度以及空间的深度或距离等感觉。
有些读者也许会问:为什么必须用两只眼睛才能得到深度?闭上一只眼只用一只眼来观察,也能知道哪个物体离我们近哪个离我们远啊!是不是说明单目相机也可以获得深度?的确,人通过一只眼也可以获得一定的深度信息,不过这背后其实有一些容易忽略的因素在起作用:首先,人本身对自己所处的世界是非常了解的(先验),对日常物品的大小尺寸也有基本预判,因此根据近大远小的常识确实可以推断出图像中物体离我们的远近;其次,人在单眼观察物体的时候其实人眼往往是在不停扫视的,这相当于一个移动的单目相机,这类似于运动恢复结构(Structure from Motion, SfM)的原理[114–116],移动的单目相机通过比较多帧差异确实可以得到深度信息。但相机不是人眼,它只会拍照,不会学习和思考。
图17从原理上展示了单目相机不能感知深度而双目可以的原因。如图所示,红色线条上三个不同远近的绿色的点P、Q、R在下方相机C1上的投影在图像的同一个位置(P1=Q1=R1),因此单目相机无法分辨成的像是远的那个点还是近的那个点,但是它们在一旁的相机C2的投影却位于图像中三个不同位置P2、Q2、R2,因此通过两个相机的观察可以唯一确定到底是空间中的哪一个点。
了解了双目立体视觉的原理,可能还有人会问,那为什么还要再投影散斑结构光呢?很简单,前文中已经提及,想要得到深度信息,必须两只眼睛从两个不同的视角观察到同一个目标点。人眼是个非常神奇的光学系统,它具有“无特征聚焦”的能力,就比如一面白墙,仍然可以对墙上没有任何特征的任意位置聚焦成像。而相机却远没有那么智能了。假设所要测量的物体是一个白色墙面,用两个相机从不同的角度拍摄了如图18所示的两幅图片,计算机是无法判断左图中物体上的A点对应于右图中墙面上的哪个点的。
有了散斑后情况就完全不一样了。由于散斑图案的随机性能够保证局部特征的唯一性,它为场景中的任何一个空间点都打上了一个唯一的标记,如图19所示。通过对左右相机的散斑图像进行局部匹配,就能够建立左右相机的像素一一对应关系,从而可以计算得到物体的视差分布。最终根据视差分布与相机的几何标定参数就能够计算得到物体每个像素的位置和深度信息,进而复原整个三维空间。
-
通过3.1节了解到典型散斑结构光3D传感器首先由红外激光投射器(IR projector)投射出近红外波段的激光散斑,经过物体(如人手或人脸等)反射发生形变后的图案被红外图像传感器接收,最终由算法基于拍摄的图片计算出目标物所处的深度位置。因此,散斑结构光3D传感器至少需要具备散斑投射与接收装置。在基于人机交互的终端产品上,3D传感器还应考虑人体安全和功耗控制等因素,如具备自动开启、人体安全距离感应等功能。图20给出了一个终端红外散斑结构光3D传感器的内部结构实物图,其主要器件从左至右依次为红外散斑投射器(IR dot projector)、距离探测器(Distance detection)、泛光灯(Flood LED)、彩色相机(Color camera),红外相机(IR camera)。相比较技术已经非常成熟的红外/可见光图像传感器,红外散斑投射器是随着散斑结构光3D视觉的发展而出现的新型结构光投射器件[117]。这些器件与后端的3D算法处理器件一起组成了具有3D传感功能的完整设备。
-
红外光投射部分是3D传感器实现3D测量的发起端,负责提供核心的近红外光源,其投射的结构光图像(通常为散斑)的质量对测量结果至关重要。结构光投影器件也是结构光3D测量技术不同于传统被动式3D测量技术的关键所在。正是有了结构光投影器件,使得传统的基于机器视觉的三维测量方法在非接触测量的特性之上,额外增加了全视场、高精度等优点。在数字投影设备出现之前,特殊的编码图案只能通过使用光源照射加工的透明掩膜实现,例如使用二值罗奇光栅实现近似正弦分布的条纹图案投影[119]。这类投影方式限制了结构光投影图案的灵活性。随着数字投影技术的发展,数字投影仪可以将投影图案以图片的形式发送并投射出来。然而数字投影仪成本高昂,体积庞大,并不适于小型化集成。
在消费终端设备上,可实现的小型化的红外投射器件多基于半导体激光器开发。与红外发光二极管(LED)相比,红外半导体激光器(Semiconductor laser)具有光发散角小,光能转换效率高,数据传输快等优点[120]。图21给出了三种常见的半导体激光器结构示意图,根据使用的半导体激光器的种类不同,可将红外散斑投射器分为基于垂直共振腔面射型激光器(Vertical-Cavity Surface-Emitting Laser, VCSEL)的投影器件(如图21(a)所示)和基于边发射激光器(Edge Emitting Laser, EEL)的投影器件[121]。其中EEL主要分为分布式反馈型发射激光器(Distributed Feedback EEL, DFB EEL)(如图21(b)所示)和法布里-珀罗型发射激光器(Fabry–Pérot EEL)(如图21(c)所示)两种。将红外发光二极管、红外VCSEL、红外EEL三种类型器件的出光图案进行对比。如图22所示,由于EEL的出射光从器件的侧面出射,光束通常呈椭圆形分布,因此相比之下其光束的均匀性不如VCSEL的出光图案。而LED的发散角较大,因此其出光的单向性不满足结构光的要求。VCSEL是目前最常用的一种激光器类型。红外散斑投射器中除了激光光源外,还包括准直镜和光学衍射器件(Diffractive Optical Element, DOE)。基于EEL的和基于VCSEL的两种投射器中,除了激光光源的种类不同以外,实现散斑投射的方式也是不一样的。EEL投射器的图案由DOE上刻蚀的图案决定,而EEL本身只提供相干光源。VCSEL投射器的图案则是在VCSEL激光器的表面加工形成。DOE的作用是将VCSEL上有限个数的散斑点进行复制并扩散。下面分别介绍基于EEL的和基于VCSEL的两种投射器。
图 21 三种半导体激光器模型(a)VCSEL;(b)分布式反馈型发射激光器;(c)法布里-珀罗型发射激光器
Figure 21. Three semiconductor lasers models (a) VCSEL; (b) DFB EEL; (c) Fabry–Pérot EEL
EEL投射器的激光具有较好的时间相干性与空间相干性。基于光学衍射理论,可以通过DOE的相位调制实现远场光场的强度调制[122-123],如图23(a)所示。
图 23 (a) DOE衍射示意图;(b) EEL散斑投射器示意图
Figure 23. (a) Schematic diagram of DOE diffraction; (b) Schematic diagram of an EEL dot projector
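为直观理解"通过DOE的相位调制实现远场光场的强度调制"这一原理,下面给出一个一维标量衍射的简化数值示意。假设夫琅禾费(远场)近似成立、出射面为单位振幅的纯相位调制,采样点数等参数均为示意取值,并非某款实际DOE的设计:

```python
import cmath

def far_field_intensity(phase):
    """标量夫琅禾费(远场)近似下,DOE 出射面复振幅的离散傅里叶变换
    的模方正比于远场强度分布(一维简化示意)。"""
    N = len(phase)
    field = [cmath.exp(1j * p) for p in phase]      # 单位振幅,仅相位调制
    intensity = []
    for k in range(N):
        s = sum(field[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
        intensity.append(abs(s) ** 2)
    return intensity

# 周期为 2 的二值 0/π 相位光栅:0 级(k=0)被完全抑制,
# 能量集中到最高空间频率分量(对应 ±1 级衍射)
phase = [0.0 if n % 2 == 0 else cmath.pi for n in range(16)]
I = far_field_intensity(phase)
```

可以看出,改变出射面上的相位分布(即 DOE 的台阶刻蚀图案)就能重新分配各衍射级的能量,这正是正文所述 DOE 设计的物理基础。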
一个EEL投射器的模型如图23(b)所示,EEL的出射光先经过光束整形器扩束,从而使激光束的横截面积可以覆盖后面的衍射元件,再经过准直元件将扩束之后的激光重新调成平行光,随后经过DOE在特定的距离形成所需的光学图案。但由于EEL是侧发光器件,因此在尺寸敏感的产品设计中使用EEL投射器需要使用矫正光路将光线出射方向以及形状进行调整。通常该矫正光路也同时具有准直光路的功能。EEL投影器件的设计难点或者成本主要在于DOE的设计。受制于微加工技术的精度与成本,DOE通常设计成台阶形式,无法实现连续相位的调制。DOE图案的刻蚀阶数越多,衍射图案质量越高,但设计成本也随之增高。另外,由于EEL侧发光的特性导致其性能测试无法在晶圆阶段完成,必须进行切割之后才能检测其质量,故其生产成本相对VCSEL较高。目前EEL投射器在移动端产品或者设备上已经难觅踪影。
VCSEL与EEL所不同的是其激光垂直于顶面射出,且具有极好的均匀性[124]。通过在VCSEL顶层特定位置开孔,可以控制激光出射处的光强分布[125-126]。基于此特性可以将所需散斑图的亮点分布刻蚀在VCSEL的顶层。由于散斑点的密度很大程度上影响3D测量精度,开孔的数量应尽可能多且密。实际上,VCSEL上开孔的尺寸同时受到两方面的制约:一方面,开孔要足够大以保证通光量,提升散斑亮度。另一方面,开口过大将导致开孔平面无法实现稳定的谐振。这样看来,似乎只能增大VCSEL的面积以实现更多的散斑点。然而VCSEL的成本颇受VCSEL的尺寸所影响,增大VCSEL的面积的方式在实际设计生产中并不符合成本要求。
VCSEL投射器获得更多的散斑点其实可以使用具备光束复制功能的DOE来实现。如图24(a)所示的是DOE的光束复制示意图,需要的复制数量可以通过特定的DOE结构实现。VCSEL投射器模型如图24(b)所示。VCSEL顶层开孔处的出射光被准直后照射到DOE上,随后DOE衍射出与入射光完全相同的,照射角度分散的若干个子光线,最终实现散斑点数量的增加。这一方式无疑大大降低了整个投影器件的设计成本。此外,VCSEL和红黄光LED同属于GaAs材料体系,使得厂家可以大幅缩短开发周期。现在移动设备中3D传感的爆发式发展也引发了VCSEL的空前需求。VCSEL被迅速推广和采用的另外一个原因在于每个投射器(或投射器阵列)都可以在晶圆划片之前进行制造和测试。这是VCSEL相比EEL的另一大优势,因为它极大地简化了后端组装测试流程,具有大批量生产的优势[127]。
回到小型化加工这一特定需求,相应的投射器中所需的镜头也需要小型化。因为不论哪种投影器件,激光器发出的红外光都需要经过准直镜头,准直镜头利用光的折射原理,将波瓣较宽的衍射图案校准汇聚为窄波瓣的近似平行光。准直镜头可以采用晶圆级镜头(Wafer Level Optics, WLO)。所谓WLO是指晶圆级镜头制造技术和工艺,用半导体工艺批量复制加工镜头,多个镜头晶圆压合在一起,然后切割成单颗镜头,如图25所示。其具有尺寸小、高度低、一致性好等特点,光学透镜间的位置精度达到nm级,是未来标准化的光学透镜组合的最佳选择。根据传统光学镜头和WLO的性能对比,WLO成本更低、生产效率更高、镜头一致性更好,更适合用于制造准直镜头[128]。
-
在结构光3D传感中,红外摄像头模组用于接收被物体反射的红外光。红外传感器目前主要基于红外CMOS器件,其使用方式与可见光传感器十分类似。目前结构光3D传感通常使用的红外传感器分辨率在100万像素至200万像素之间,相机帧率在30帧每秒左右,可满足实时测量的条件。实际应用中,红外相机模组的分辨率与最终输出的深度图的分辨率之间存在一定的倍数关系,3D深度图的分辨率通常低于原始红外图像的分辨率:一方面原因是计算压力较大,高分辨率的3D实时处理需要消耗较多的硬件资源;另一方面是目前3D数据在应用端往往只作为活体检测的参考,因此其分辨率在原始图像分辨率的基础上进一步压缩也可满足需求。
摄像头所配备的光学镜头相比其他模组也是一个相对成熟的部件,其主要参数包含光圈、焦距、景深、视场角等。镜头需要配合红外窄带滤光片将入射光限制在较窄的波段范围内,以排除其他波段(如可见光)的串扰。此外,相机镜头的视场角应与投射器的发散角保持一致,以保证散斑图充满相机视场的同时,在任何深度位置都能拍摄到相同的散斑密度。总体而言,接收端除窄带滤光片较为特殊、制造难度较高外,红外传感器和镜头都是较成熟的器件。
-
结构光3D传感器的辅助器件包括彩色相机、距离探测器、泛光灯等。彩色相机的作用主要包括两个方面,一是在消费端作为人脸识别的数据源,二是对输出的3D结果进行色彩渲染。在一些涉及人机交互的3D传感应用中,例如人脸探测与识别,需要使用距离传感器件实现有人靠近时打开设备的功能,这与手机识别人脸靠近从而自动关闭屏幕显示是一个原理。当有人靠近时,距离探测器投射的红外光被人脸反射后被接收器接收。如果反射光强超过了预设的阈值,则开启3D传感功能。传感器上的辅助器件本身不对3D数据产生影响,但可丰富3D传感器的功能,提升使用安全性,降低设备功耗。
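上述"反射光强超过预设阈值则开启3D传感"的触发逻辑可以用如下示意代码表达。其中的双阈值(迟滞)设计是笔者补充的假设,用于避免在临界距离附近反复开关,并非某款实际产品的实现:

```python
def proximity_trigger(samples, on_threshold, off_threshold):
    """距离探测触发逻辑示意:反射光强超过开启阈值时激活 3D 传感;
    采用迟滞(开启阈值高于关闭阈值)避免临界状态下的抖动。"""
    active = False
    states = []
    for s in samples:
        if not active and s > on_threshold:
            active = True               # 有人靠近,开启 3D 传感功能
        elif active and s < off_threshold:
            active = False              # 目标远离,关闭以降低功耗
        states.append(active)
    return states

# 示意数据:光强先升高(靠近)再回落(远离)
states = proximity_trigger([0.1, 0.6, 0.45, 0.2],
                           on_threshold=0.5, off_threshold=0.3)
```
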
-
如上文所述,散斑结构光三维传感器在实现深度测量时有如下三方面的核心步骤:首先,需要设计较为理想的散斑图案向待测场景中投射;然后,对拍摄到的两张散斑图像进行逐像素点匹配;最终,根据匹配结果与标定相机所得到的几何参数计算每个像素的深度,从而获得深度图。下文中详细介绍了这三个方面所涉及到的核心算法:散斑图案设计,散斑相关(视差估计)与三维重建。
-
对于基于散斑投影的三维重建技术,散斑图案的设计是至关重要的。散斑图案的质量会影响后续散斑相关法的稳定性,以及三维重构的点云密度与空间分辨率。散斑图案的设计方法可以根据编码域分为空间编码(单次拍摄)和时间编码(多次拍摄)两类方案。市面上大多数流行的3D结构光传感器所采用的散斑图案设计方法都属于空间编码,因为基于空间编码的散斑图案具有天生的全局唯一性,从而使基于散斑投影的三维重建技术具有单幅重建的优势。因此,基于空间编码的散斑图案设计方法的关键性思想是如何保证局部编码在全局图像中的唯一性。空间编码方法有非正规码[42-43]、M-array码[44]和De Bruijn编码[39–41]。非正规编码是最早的一类空间编码方法,该方法通常根据不同的约束,使用繁琐的蛮力搜索算法来保证图案的唯一性,从而生成投影图案。图26显示了根据两种不同的约束生成的投影图。如图26(a)所示,投影图案由白色随机点和一系列长度不同的黑色狭缝组成,其中随机点形成于每个狭缝上随机分布的切口处,用于识别每个黑色狭缝。因此,每个狭缝被分成许多小的线段,在相机捕获的图像和待投影的图案之间执行线段匹配即可获得3D数据[43]。如图26(b)所示,投影图案由三种具有不同亮度的像素组成,包括255级灰度、0级灰度和127级灰度。使用不同数量且不同亮度的像素可以产生多种编码序列。例如,一个由72像素组成的序列BWG,WBG,WGB,GWB,GBW,BGW,其中,B、G和W分别由四个0级、127级和255级灰度的像素组成,以便可以容易地依据相关性实现图像匹配,继而重复此72个像素的序列即可创建投影图案[129]。由于没有使用任何正式、标准的编码理论来定义编码图案中每个邻域所表示的码字,因此根据不同的约束,这些编码方法所生成的图案中每个邻域的视觉特征也有所不同。此外,因缺乏合理的编码原理,基于这种方法所生成的图案可能出现重复的邻域,即不能确保码字的唯一性,因此该类方法缺乏鲁棒性,无法获得理论上的最佳投影图案。
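上文所述"通过蛮力搜索保证图案局部编码唯一性"的约束,可以用如下示意代码进行检验。这里以二值图案和 m×n 滑动子窗口为例,图案内容与窗口尺寸均为示意取值,并非某款实际产品的编码方案:

```python
import random

def unique_windows(pattern, m, n):
    """蛮力检查二值图案中所有 m×n 子窗口(码字)是否全局唯一。"""
    H, W = len(pattern), len(pattern[0])
    seen = set()
    for i in range(H - m + 1):
        for j in range(W - n + 1):
            window = tuple(tuple(pattern[i + di][j:j + n]) for di in range(m))
            if window in seen:
                return False        # 出现重复邻域,码字唯一性被破坏
            seen.add(window)
    return True

# 全零图案显然存在大量重复子窗口,不满足唯一性
flat = [[0] * 8 for _ in range(8)]

# 随机图案需逐一验证;窗口越大,满足唯一性的概率越高
random.seed(0)
pattern = [[random.randint(0, 1) for _ in range(32)] for _ in range(32)]
ok = unique_windows(pattern, 5, 5)
```

这也解释了正文的论断:随机生成的图案并不能保证码字唯一,必须显式搜索验证,故非正规编码"无法获得理论上的最佳投影图案"。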
为了解决这一问题,一些研究人员提出基于De Bruijn序列的空间邻域编码方法。De Bruijn序列可表示为
$B(k,n)$ ,是定义在$k$ 个符号上、长度为$k^n$ 的循环序列,其中每个长度为$n$ 的子序列在循环序列中恰好出现一次,如图27所示。基于这种一维的编码规则,许多基于De Bruijn序列的编码方法已经被提出并用来生成具有伪随机特性的投影图案,如多缝(由黑间隙分隔的窄条)[130]、多条带(相邻条)[131]和网格图案[132]等。此外,由于投影图案是二维的,因此不限于一维的De Bruijn序列编码规则,二维的De Bruijn序列编码规则也能被用于投影图案设计,即M-array编码,如图28所示。待投影的图案可表示为
$M$ ,其为尺寸为$u \times v$ 的码字矩阵。若$M$ 矩阵具有局部唯一性,即每个大小为$m \times n$ 的子窗口恰好仅在$M$ 矩阵中出现一次,则$M$ 矩阵是理想的伪随机图案。基于这些优点,这类基于M-array编码的伪随机图案已被广泛用于基于散斑投影的三维重建技术。接下来,文中针对市面上散斑结构光传感器产品的投影图案进行分析。图29给出了三种市面上散斑结构光传感器所投射出的散斑图案。从这些散斑图案中可以发现一些共同点,如散斑图案的整体亮度较为均匀,具有较好的全局唯一性,每个散斑点之间的间隔有一定的规则。此外,如图29(a)中所示,该散斑图中没有明显的重复子图案,因此该散斑图是基于全局唯一性的编码方式生成的,图案编码的设计规则较为复杂。如图29(b)中所示,该散斑图由类似九宫格状的子散斑图组成,九宫格中每个格子的子散斑图是相同的,因此该散斑图仅需保证子散斑图具有唯一性。如图29(c)中所示,该散斑图呈明显的重复块状子图案交错分布,每个块状子图案中的散斑图是唯一的,但是由于块状子散斑图呈交错形式排列,因此基于类似M-array编码的方式仍能保证整体散斑图的全局唯一性。此外在M-array编码的基础上,还应根据实际的系统结构来设计散斑图案,如基线距离、相机焦距和工作距离等系统参数可以共同确定测量系统的视差约束,这些编码规则都将促进生成高质量的散斑图案。
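作为参考,生成 De Bruijn 序列 $B(k,n)$ 的一个简短示意实现如下,采用标准的 FKM 递归算法;示例中对生成结果的"每个长度为 $n$ 的子串循环出现且仅出现一次"这一性质进行了枚举验证:

```python
def de_bruijn(k, n):
    """生成 De Bruijn 序列 B(k, n):k 元字母表上长度为 k^n 的循环序列,
    每个长度为 n 的子串在其中(循环意义下)恰好出现一次(FKM 算法)。"""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

seq = de_bruijn(2, 3)                   # 长度为 2^3 = 8 的循环序列
cyclic = seq + seq[:2]                  # 循环展开,便于枚举全部长度为 3 的子串
words = {tuple(cyclic[i:i + 3]) for i in range(len(seq))}
```

运行后 `words` 应恰好包含全部 $k^n$ 个互不相同的码字,这正是该类编码能保证局部唯一性的数学基础。
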
-
常见的散斑结构光传感器具有两种系统结构:单相机(单目,即只有一个红外摄像头)与双相机(双目,即有两个红外摄像头)。对于双目结构的基本原理之前已经进行了详细讨论。由于可以同时拍摄物体获取左右原始散斑图,因此可以直接对两张散斑图像进行逐像素点匹配来获得它们之间的视差图。上述过程中所涉及到的核心算法就是基于局部窗口的图像相关技术。如图30所示,对于左图中的一个像素点(左图中红色方框中心),在右图中从左到右用一个同尺寸局部窗口内的像素和它计算相似程度,相似度的度量有很多种方法,常用的相关函数有零均值归一化互相关函数(ZNCC)、零均值归一化差平方和函数(ZNSSD)等,它们的计算公式见表1和表2[133]。图30中下方的ZNCC曲线显示了相关计算结果,相关度最大的位置对应的像素点就是最佳的匹配结果。
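上述基于局部窗口的 ZNCC 匹配过程可以用如下简化代码示意。这里用一维信号模拟极线校正后的一条扫描线,散斑用随机数代替,窗口半宽、搜索范围等参数均为示意取值:

```python
import random

def zncc(f, g):
    """两个等长窗口的零均值归一化互相关(ZNCC),取值范围 [-1, 1]。"""
    n = len(f)
    fm, gm = sum(f) / n, sum(g) / n
    num = sum((a - fm) * (b - gm) for a, b in zip(f, g))
    df = sum((a - fm) ** 2 for a in f) ** 0.5
    dg = sum((b - gm) ** 2 for b in g) ** 0.5
    return 0.0 if df == 0 or dg == 0 else num / (df * dg)

def match_pixel(left, right, x, half, d_max):
    """沿同一极线在右图中搜索使 ZNCC 最大的位置,返回整像素视差及相关值。"""
    win = left[x - half:x + half + 1]
    best_d, best_c = 0, -2.0
    for d in range(d_max + 1):
        xr = x - d
        if xr - half < 0:
            break
        c = zncc(win, right[xr - half:xr + half + 1])
        if c > best_c:
            best_d, best_c = d, c
    return best_d, best_c

# 构造一条随机“散斑”扫描线:左图相对右图整体平移 4 个像素(真实视差为 4)
random.seed(1)
right = [random.random() for _ in range(64)]
left = [random.random() for _ in range(4)] + right[:60]
d, c = match_pixel(left, right, x=20, half=3, d_max=10)
```

对于无噪声的平移信号,在真实视差处 ZNCC 取得最大值(接近 1),与图30中相关曲线峰值对应最佳匹配点的结论一致。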
值得注意的是,基于局部窗口的图像相关技术的运算过程是比较复杂耗时的,因为里面涉及到元素的多次相乘与累加。如果对于每个图像点的搜索都要完全遍历另一幅图像整个二维视场,那么算法的整体计算量更是难以想象。通常解决这一问题的方式是让两个相机尽可能地水平放置(比如在同一平面上左右放置),使得两个视图的散斑图像处于同一水平线上,从而直接采用一维行搜索即可,不仅大幅缩减了匹配算法的搜索区间,还降低了误匹配的概率。当然由于制造工艺、加工装调精度所限,两个相机之间可能难以保持完全水平。因此,通常还可以通过对双目立体视觉系统进行标定,利用标定数据对采集到的散斑图像进行极线校正,使得两个视图的散斑图像处于同一极线上,从而也可使二维搜索简化为一维搜索,如图31所示。假设已知正确的对应参考点,则参考点与待匹配点之间的关系为:
$$({x_r} + d,{y_r}) = ({x_t},{y_t})$$ (1) 式中:
$d$ 代表参考点与待匹配点之间的视差值。
表 1 基于互相关准则的匹配函数
Table 1. Matching function based on cross-correlation criteria
CC correlation criterion Definition
Cross-correlation (CC) ${C_{CC}} = \displaystyle\sum\limits_{i = - M}^M {\displaystyle\sum\limits_{j = - M}^M {[f({x_i},{y_j})g(x_i',y_j')]} } $
Normalized cross-correlation (NCC) ${C_{NCC}} = \displaystyle\sum\limits_{i = - M}^M {\displaystyle\sum\limits_{j = - M}^M {[\dfrac{{f({x_i},{y_j})g(x_i',y_j')}}{{\bar f\bar g}}]} } $
Zero-normalized cross-correlation (ZNCC) ${C_{ZNCC}} = \displaystyle\sum\limits_{i = - M}^M {\displaystyle\sum\limits_{j = - M}^M {\left\{ {\dfrac{{\left[ {f({x_i},{y_j}) - {f_m}} \right] \times \left[ {g(x_i',y_j') - {g_m}} \right]}}{{\Delta f\Delta g}}} \right\}} } $
表 2 基于SSD相关准则的匹配函数
Table 2. Matching function based on SSD-correlation criteria
SSD correlation criterion Definition
Sum of squared differences (SSD) ${C_{SSD}} = \displaystyle\sum\limits_{i = - M}^M {\displaystyle\sum\limits_{j = - M}^M {{{[f({x_i},{y_j}) - g(x_i',y_j')]}^2}} } $
Normalized sum of squared differences (NSSD) ${C_{NSSD}} = \displaystyle\sum\limits_{i = - M}^M {\displaystyle\sum\limits_{j = - M}^M {{{\left[ {\dfrac{{f({x_i},{y_j})}}{{\bar f}} - \dfrac{{g(x_i',y_j')}}{{\bar g}}} \right]}^2}} } $
Zero-normalized sum of squared differences (ZNSSD) ${C_{ZNSSD}} = \displaystyle\sum\limits_{i = - M}^M {\displaystyle\sum\limits_{j = - M}^M {{{\left[ {\dfrac{{f({x_i},{y_j}) - {f_m}}}{{\Delta f}} - \dfrac{{g(x_i',y_j') - {g_m}}}{{\Delta g}}} \right]}^2}} } $
对于单目相机系统,理解起来可能要稍微麻烦一点。仅能获得单一视角的散斑图,那么该如何进行双目匹配呢?根据光路可逆原理,可以将散斑投射器看作是一个“逆相机”,设计好的投影散斑图就是这个逆相机所能拍摄到的图像,而物体其实可以理解为表面被喷涂上了扭曲的散斑。如果把散斑投射器换成光学系统完全一致的相机,它所拍摄到的恰好是那幅设计好的没有扭曲的投影散斑图。所以只需要将投影仪当作逆向的相机来进行标定,然后将单幅测量得到的散斑图与所设计的投影散斑图相匹配即可。但这种方式存在两方面问题:(1) 一般考虑到成本、体积等因素,结构光3D传感器里的散斑投射器并不是一个完整的数字投影仪,其成像模型很难以参数化的形式表示,标定起来往往比较困难,难以获得标准的标定参数。因此,散斑投射器一般很难完美地等效于一个“逆相机”,即单目相机系统不能严格地近似为双目相机系统,也就不能直接利用双目相机系统的原理计算得到被测场景的深度信息。(2) 由于光学系统离焦、物体表面反射率等影响,设计的散斑图与实际拍摄的散斑图往往存在一定差异,比如分辨率、畸变、像素尺寸等。这些因素都会导致十分糟糕的匹配结果,从而影响到最终的深度测量结果。因此,另一种可行的方案是预先在所测量的深度范围内沿着z轴平移以获取一系列参考散斑图像,然后将相机所获取的散斑图像与这些已知距离的标准平板散斑图像之间进行直接匹配,如图32所示。在这种情况下,在恒定且已知的距离
${Z_{\rm{Ref}}}$ 所获取的散斑图像称为参考图像,因此可以按以下公式计算参考图的参考视差值${d_{\rm{Ref}}}$ :$${d_{\rm{Ref}}} = \frac{{Bf}}{{{Z_{\rm{Ref}}}}}$$ (2) 式中:$B$ 为测量系统的基线距离;$f$ 为相机的焦距。
图 31 双目立体视觉系统中一维匹配的原理与流程。(a) 基本原理;(b) 匹配流程
Figure 31. Principle and process of one-dimensional matching in binocular stereo vision system. (a) Basic principle; (b) Matching process
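按照式(2)的参考视差定义,单目参考图方案最终的深度恢复可概括为如下示意代码:目标点相对参考图的视差与参考视差相加得到总视差,再由三角关系反解深度。其中基线、焦距、参考平面深度等参数均为假想取值,视差符号约定(靠近时相对视差为正)也是示意性假设:

```python
def depth_from_reference(d_rel, z_ref, B, f):
    """单目散斑系统的深度恢复示意:已知参考平面深度 z_ref
    (对应参考视差 d_ref = B*f/z_ref),目标点相对参考图的视差为 d_rel,
    则总视差 d = d_ref + d_rel,深度 Z = B*f/d。"""
    d_ref = B * f / z_ref
    return B * f / (d_ref + d_rel)

# 假想参数:基线 B = 75 mm,等效焦距 f = 580 pixel,参考平面位于 1000 mm 处
Z0 = depth_from_reference(0.0, 1000.0, 75.0, 580.0)       # 相对视差为 0,回到参考平面
Z_near = depth_from_reference(5.0, 1000.0, 75.0, 580.0)   # 正的相对视差对应更近的点
```

相对视差为零时恢复出的深度即参考平面深度,这正是参考图方案"以已知距离平板为基准"的含义。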
然后,同样地对于相机捕获的目标图像中每一个有效的匹配点
$p(x,y)$ ,通过相关函数可以在参考图像中找到相关值最大的对应参考点${p_{\rm{Ref}}}({x_{\rm{Ref}}},{y_{\rm{Ref}}})$ ,$p(x,y)$ 与${p_{\rm{Ref}}}({x_{\rm{Ref}}},{y_{\rm{Ref}}})$ 之间的关系为:$$(x,y) = ({x_{\rm{Ref}}} + {d_{\rm{Rel}}},{y_{\rm{Ref}}})$$ (3) 式中:
${d_{\rm{Rel}}}$ 为目标图像相对于参考图像的相对视差值。从而可以通过这一方式获得每个有效目标点的实际视差值$d$ :$$d = {d_{\rm{Ref}}} + {d_{\rm{Rel}}}$$ (4) 上述算法在具体操作中还有很多实际问题值得注意,例如:通过相关函数仅能计算出具有整像素精度的视差值,这在一定程度上限制了深度量化分辨率。因此,可以通过诸多匹配优化与亚像素细化技术来改善这些估计(尽管它们会增加计算复杂度),如成本聚合[134-135]、半全局匹配[136]、五点拟合算法[137]和左右一致性检验等。如图33所示,通常在基于局部图像的匹配算法之后,对一个支持窗口内的匹配成本进行聚合从而得到参考图像上一点
$p$ 在视差$d$ 处的累积成本$CA(p,d)$ ,这一过程称为成本聚合。通过匹配成本聚合,可以降低异常点的影响,提高视差图的信噪比进而提高匹配精度。代价聚合策略通常是局部匹配算法的核心,策略的好坏直接关系到最终视差图的质量。如图34所示,在执行匹配成本聚合后,为了进一步减轻匹配的歧义性,还可采用具有平滑度约束的多方向扫描线优化方法,即半全局匹配。这里考虑使用参考图像上一点$p$ 的邻域视差数据来构造惩罚函数以增加视差图的平滑性。在半全局匹配方法中,一般先使参考图像上一点$p$ 在单个方向上的能量最小化,并在多个方向上重复(一般是四个方向或十六个方向),然后通过平均得出最终结果。另一方面,局部窗口尺寸的选择也是很有讲究的[138]。图35显示了不同尺寸的局部窗口对深度图计算结果的影响。从图中不难发现:小尺寸的窗口精度更高、细节更丰富,但对噪声特别敏感;大尺寸的窗口精度不高、细节不够,但对噪声比较鲁棒。此外,窗口越大,计算量也会相应地增加,从而大大延长运算时间。另外,不同匹配代价的处理能力也各自不同,而复合代价能够使它们相互补充,提高算法稳定性。例如,基于census变换的匹配函数容易在具有重复局部结构的区域中产生错误的匹配,而基于AD的匹配函数无法处理大型无纹理区域;复合的AD-Census匹配函数则能有效减少由单个匹配函数引起的误差[134]。但是,如何确定各个匹配成本的权重依然是一个值得考虑的问题。
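以下代码片段以一维情形示意上述两个环节:盒式滤波形式的代价聚合,以及对最优视差邻域代价做抛物线拟合的亚像素细化(思想与五点拟合类似,此处仅取三点;代价函数与窗口半径均为示意取值,并非某款实际传感器的算法):

```python
def aggregate(cost_volume, radius):
    """代价聚合的一维简化:对代价体的每个视差层,
    在以各像素为中心、半径为 radius 的支持窗口内求和。"""
    agg = []
    for layer in cost_volume:            # layer[x]:该视差下各像素的匹配代价
        n = len(layer)
        agg.append([sum(layer[max(0, x - radius):min(n, x + radius + 1)])
                    for x in range(n)])
    return agg

def parabola_subpixel(c_m, c_0, c_p):
    """对最优整像素视差 d 及其邻域 (d-1, d, d+1) 的代价做抛物线拟合,
    返回亚像素偏移量(约在 [-0.5, 0.5] 内)。"""
    denom = c_m - 2.0 * c_0 + c_p
    return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

# 假设某像素的代价采样自 c(d) = (d - 2.3)^2,整像素最优视差为 2;
# 对二次型代价,抛物线拟合可精确恢复真实视差 2.3
def c(d):
    return (d - 2.3) ** 2

d_sub = 2 + parabola_subpixel(c(1), c(2), c(3))
```

实际系统中的代价通常并非理想二次型,拟合结果只是对真实亚像素视差的近似,这也是正文强调这些细化手段会引入额外计算量和误差权衡的原因。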
图 35 不同尺寸的局部窗口对视差图计算结果的影响
Figure 35. Effect of local windows with different sizes on the calculation results of disparity maps
除了基于局部窗口的立体匹配方法外,另一种比较主流的方法是全局立体匹配的方法。全局立体匹配算法主要是采用了全局的优化理论方法估计视差,建立一个全局能量函数,其包含一个数据项和平滑项,通过最小化全局能量函数得到最优的视差值。其中图割(Graph Cuts, GC)[139]、置信传播(Belief Propagation,BP)[140]、动态规划(Dynamic Programming, DP)[141-142]等优化算法都是常用的求解能量最小化的方法。全局匹配算法一般定义如下能量函数:
$$\begin{gathered} E(d) = {E_{\rm data}}(d) + {E_{\rm smooth}}(d) \\ = \sum\limits_{p \in R} {C(p,d)} + \sum\limits_{q,p \in R} {P({d_q} - {d_p})} \\ \end{gathered} $$ (5) 式中:
数据项${E_{\rm data}}(d)$ 描述了匹配程度;平滑项${E_{\rm smooth}}(d)$ 体现了场景的平滑约束;$C(p,d)$ 是匹配成本;$P({d_q} - {d_p})$ 是点$p$ 和$q$ 的视差之间的函数,一般称之为惩罚项:当点$p$ 和$q$ 的视差不相等时,$P({d_q} - {d_p}) > 0$ ,且两者差值越大,$P({d_q}\! -\! {d_p})$ 的值越大;当点$p$ 和$q$ 的视差相等时,$P({d_q} \!-\! {d_p})\! =\! 0$ 。由于全局匹配算法在数学上是一个能量函数的优化问题,因此可以找到最优解。但是,这个问题被证明在二维空间是NP-hard的。因此,虽然全局算法具有准确性较高的优点,但其计算速度非常慢,在实时性要求高的场合不适合使用全局立体匹配算法。
-
无论是双目系统还是单目系统,一旦相机捕获到的目标图像中的所有有效像素的视差值被计算得到,就可以执行三维重建操作,计算得到深度图。首先,从理想的双目相机成像模型开始分析:假设左右两个相机位于同一平面(光轴平行),且相机参数(如焦距
$f$ )一致。在获取到具有亚像素精度的视差图后,通过三角测量关系如图36所示,可获得三维目标点$P(X,Y,Z)$ 的深度值$Z$ :图 36 基于双目视觉系统的三角测量模型示意图
Figure 36. Schematic diagram of triangulation measurement model based on binocular vision system
$$Z = \frac{{Bf}}{d}$$ (6) 式中:
$B$ 为测量系统的基线距离;$f$ 为相机的焦距。对于双目相机系统,基线距离为两个相机的光心之间的水平距离。对于单目相机系统,基线距离则为相机和点阵投射器的光心之间的水平距离。这些参数可以通过相机标定获得。接下来介绍相机的标定模型。相机是基于结构光的3D成像系统的重要设备。图37所示即为相机的针孔模型,在该模型中,总共存在四个坐标系:图像像素坐标系
$O{\rm{ - }}uv$ ,图像物理坐标系$O'{\rm{ - }}xy$ ,相机坐标系${O_c}{\rm{ - }}{x_c}{y_c}{z_c}$ 以及世界坐标系${O_w}{\rm{ - }}{X_w}{Y_w}{Z_w}$ 。图像像素坐标系是以相机CCD面阵左上角的像素为坐标原点,单位是pixel;图像物理坐标系一般是以相机CCD面阵中间的某一个点为坐标原点,单位为mm;相机坐标系以光学镜头的中心作为坐标原点,光轴经过坐标原点${O_C}$ 垂直于镜头平面;世界坐标系则是在标定过程中自定义出来的一个坐标系,单位是mm。假设在世界坐标系下有一个点
$P$ ,其在世界坐标系下的坐标为$({x_w},{y_w},{z_w})$ ,其在相机坐标系下的坐标为$({x_c},{y_c},{z_c})$ 。根据针孔成像模型,点$P$ 在CCD面阵平面上的透视投影点${P'}$ 为该点与透镜中心点${O_C}$ 的连线跟CCD面阵平面的交点。点${P'}$ 在图像像素坐标系和图像物理坐标系下的坐标分别为$(u,v)$ 、$(x,y)$ 。它们存在如下关系:$$\left[ \begin{gathered} u \\ v \\ 1 \\ \end{gathered} \right] = \left[ {\begin{array}{*{20}{c}} {{s_x}}&0&{{u_0}} \\ 0&{{s_y}}&{{v_0}} \\ 0&0&1 \end{array}} \right]\left[ \begin{gathered} x \\ y \\ 1 \\ \end{gathered} \right]$$ (7) 式中:
${s_x}$ 和${s_y}$ 为图像像素坐标系下分别在$u$ 轴和$v$ 轴方向上的像素密度,单位为pixel/mm。$({u_0},{v_0})$ 为光轴与相机CCD平面交点${O'}$ 的像素坐标系坐标,表示两个坐标系之间的偏移量。在利用标定板标定相机时,会自定义一个三维空间中的世界坐标系,该坐标系的X-Y平面就是标定板所在的平面,垂直于标定板平面的方向即是该坐标系下的Z轴方向。假设该世界坐标系变换到相机坐标系的旋转矩阵和平移矩阵分别为
$R$ 、$T$ ,则点$P$ 的世界坐标系坐标$({x_w},{y_w},{z_w})$ 转化为相机坐标系坐标$({x_c},{y_c},{z_c})$ 的转换关系可表示为:$$\left[ \begin{gathered} {x_c} \\ {y_c} \\ {z_c} \\ 1 \\ \end{gathered} \right] = \left[ {\begin{array}{*{20}{c}} R&T \\ 0&1 \end{array}} \right]\left[ \begin{gathered} {x_w} \\ {y_w} \\ {z_w} \\ 1 \\ \end{gathered} \right]$$ (8) 通过针孔成像模型的比例关系获得:
$${z_c}\left[ \begin{gathered} x \\ y \\ 1 \\ \end{gathered} \right] = \left[ {\begin{array}{*{20}{c}} f&0&0&0 \\ 0&f&0&0 \\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} R&T \\ 0&1 \end{array}} \right]\left[ \begin{gathered} {x_w} \\ {y_w} \\ {z_w} \\ 1 \\ \end{gathered} \right]$$ (9) 联立以上三个公式,不难得出世界坐标系跟图像像素坐标系之间的数学转换关系:
$${z_c}\left[ \begin{gathered} u \\ v \\ 1 \\ \end{gathered} \right] = \left[ {\begin{array}{*{20}{c}} {{f_x}}&0&{{u_0}}&0 \\ 0&{{f_y}}&{{v_0}}&0 \\ 0&0&1&0 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} R&T \\ 0&1 \end{array}} \right]\left[ \begin{gathered} {x_w} \\ {y_w} \\ {z_w} \\ 1 \\ \end{gathered} \right]$$ (10) 式中:
${f_x} = f{s_x}$ ,${f_y} = f{s_y}$ ,旋转矩阵$R$ 和平移矩阵$T$ 是外部参数,由相机的空间位置和标定过程中自定义的世界坐标系的空间位置之间的相对位置决定。因此,对于双目立体视觉系统,当已知两个相机的标定参数和两个相机之间每个像素的匹配点对后,由于每一匹配点对对应着世界坐标系中同一点,因此可以通过公式(10)求解出物体的三维信息。 -
前面几个小节已对结构光3D传感器的相关原理、关键器件、核心算法进行了介绍。器件的设计与选型需要基于测量的原理来设计,而算法的顺畅运行则需要合适的算法平台作为支撑。此节对小型化结构光3D传感器算法平台所涉及的硬件系统架构进行介绍。
一个典型的3D结构光传感器的系统构架如图38所示,可以根据具体功能划分为以下四个主要部分:彩色相机、红外相机、红外投射器、算法平台。算法平台以外的部分文中已经谈及,此节主要针对算法平台进行介绍。
算法平台主要负责将获取的散斑图或者其他包含结构光信息的图片进行处理以获取场景的3D深度信息。其主要完成三个功能:一是驱动彩色相机、红外相机,以及红外投射器正常工作(参考图38中的链路关系);二是接收彩色相机和红外相机数据进行深度图像处理;三是将计算出来的深度数据输出。
目前主流的移动端结构光3D传感器多基于散斑相关算法求解深度信息,因此3D数据的获取都需要大量的运算。基于PC端的算法计算可以通过GPU并行计算实现加速。如今面对市场上移动端设备对3D测量的需求,3D传感器厂家设计研发了专用的ASIC芯片以专门支持基于散斑图相关的算法,如奥比中光的MX系列芯片、华捷艾米的IMI系列芯片等。围绕自主设计研发的3D感知芯片,配合相应的数据传输接口即组成了算法平台所涉及的完整硬件系统架构。
文中以华捷艾米的一款产品为例介绍其硬件系统架构,如图39所示。其中IMI1180是针对深度图像处理算法设计的芯片,该芯片不仅支持红外与彩色摄像头数据输入,而且集成了USB2.0控制器,能够驳接不同的硬件平台。硬件平台输出的数据通过USB Hub传出,方便后续功能开发使用。随着3D传感器市场的逐渐扩大,其相关设备的出货量也稳步增长,芯片研发的高投入也将获得回报。
此外,通用的开发平台(如FPGA)同样可以实现3D实时的运算处理,但其成本较高以及芯片供应依赖进口等问题限制了基于该平台的3D传感器的大批量生产能力。当然,面对快速发展的3D传感器市场,选取通用开发平台进行前期验证与小规模打样仍然是一种普遍的选择。此外,小型企业也可以通过通用的硬件平台快速实现移动端设备的开发。以如图40所示的图漾科技的一款设备为例,其使用了FPGA芯片作为算法运行的硬件平台。与自主研制的专用算法芯片相比,由于通用硬件平台往往具备可重复编程的功能,因此算法具有可升级的可能。从图40中可以看出,该款设备使用了Intel公司的Cyclone V系列FPGA芯片作为算法平台,该平台内嵌了通用的相机驱动接口,故IR Camera与RGB Camera可以直接连接到FPGA的通用I/O上。这不仅省去了专用相机驱动芯片的成本,也大大减轻了硬件开发人员的工作量。由于FPGA本身不具备与其他硬件平台通信的能力,故需要配合USB 2.0控制器(如CY68013A等)将算法输出的结果传出。
Has 3D finally come of age? ——An introduction to 3D structured-light sensor
-
摘要: 三维成像与传感技术作为感知真实三维世界的重要信息获取手段,为重构物体真实几何形貌及后续的三维建模、检测、识别等方面提供了数据基础。近年来,计算机视觉和光电成像技术的发展以及消费电子与个人身份验证对3D传感技术日益增长的需求促进了三维成像与传感技术的蓬勃式发展。2D摄像头向3D传感器的转变也将成为继黑白到彩色、低分辨率到高分辨率、静态图像到动态影像后的“第四次影像革命”。《红外与激光工程》本期策划组织的“光学三维成像与传感”专题,共包含高水平稿件20篇,其中综述论文15篇,研究论文5篇。这些论文系统介绍了光学三维成像传感领域热点专题的研究进展与最新动态,主题全面涵盖了当前三维光学成像领域的前沿研究方向:结构光三维成像、条纹投影轮廓术、干涉测量技术、相位测量偏折术、三维立体显示技术(全息显示、集成光场显示等)、三维成像传感技术与计算成像相关交叉领域(如三维鬼成像)等。而此文作为本期专栏的引子,概括性地综述了典型的三维传感技术,并着重介绍了三维结构光传感器技术的发展现状、关键技术、典型应用;讨论了其现存问题、并展望了其未来发展方向,以求抛砖引玉。Abstract: Three-dimensional (3D) imaging and sensing technologies, as valuable information acquisition tools for perceiving the real 3D world, provide data bases for the reconstruction of the geometric shape of objects and subsequent 3D modeling, detection, and recognition. Recently the development of computer vision and optoelectronic imaging technology, as well as the growing demand for 3D technologies in consumer electronics and personal authentication, have promoted the thriving growth of 3D imaging and sensing technologies. After the imaging revolution from monochrome to color, low resolution to high resolution, and static image to dynamic video, the transition of the camera from 2D to 3D will become the new "fourth imaging revolution." This issue of "Infrared and Laser Engineering" organizes a special topic on "Optical 3D Imaging and Sensing", which contains 20 high-quality articles, including 15 review papers and 5 research papers. 
These papers systematically introduce the research progress or trends of the cutting-edge research topics in the field of optical 3D imaging and sensing, and their themes comprehensively cover the current hot research directions in the field of 3D optical imaging: structured-light 3D imaging, fringe projection profilometry, interferometry, phase measuring deflectometry, 3D display technologies (such as holographic display, and integral/light field display), and the interdisciplinary fields of 3D sensing technologies and computational imaging technologies (such as 3D ghost imaging). As the preface of this issue, this paper summarizes the typical 3D sensing technologies and focuses on the current status, key technologies, and typical applications of the 3D structured-light sensor technologies, discusses its existing challenges, and looks forward to its future development directions.
-
Key words:
- structured-light /
- 3D imaging /
- 3D measurement /
- 3D structured-light sensor
-
图 62 对气枪发射的子弹的三维测量与跟踪[235]。(a)不同时间点的相机图像;(b)相应3D重建结果;(c)枪口区域的3D重建(对应于图(b)中方框标出的区域)以及在飞行过程中三个不同时间点(7.5 ms、12.6 ms和17.7 ms)的子弹的3D重建。插图显示在17.7 ms处穿过飞行子弹中心的水平(x-z)和垂直(y-z)轮廓;(d)最后时刻(135 ms)场景的3D点云,彩色线显示130 ms长的子弹轨迹。插图为子弹速度随时间变化的曲线
Figure 62. 3D measurement and tracking of a bullet fired from a toy gun. (a) Camera images at different time points; (b) Corresponding 3D reconstructions; (c) 3D reconstruction of the muzzle region (corresponding to the boxed region shown in (b)) as well as the bullet at three different points of time over the course of flight (7.5 ms, 12.6 ms, and 17.7 ms). The insets show the horizontal (x-z) and vertical (y-z) profiles crossing the body center of the flying bullet at 17.7 ms; (d) 3D point cloud of the scene at the last moment (135 ms), with the colored line showing the 130 ms long bullet trajectory. The inset plots the bullet velocity as a function of time
图 63 5D高光谱成像系统、结果及高速热成像系统及结果[244, 245]。(a) 5D高光谱成像系统;(b)高速热成像系统;(c) 5D高光谱成像结果:对柑橘植物的吸水性的测量;(d) 高速热成像结果:不同时间对篮球运动员的测量
Figure 63. Systems and results of 5D hyperspectral imaging and high speed thermal imaging[244, 245]. (a) 5D hyperspectral imaging system; (b) High speed thermal imaging system; (c) 5D hyperspectral imaging results: the measurement of water absorption by a citrus plant; (d) High-speed thermal imaging results: the measurement of a basketball player at different times
-
[1] 央视315晚会幕后: 揭秘人脸识别破解始末[EB/OL]. [2020-01-09]. http://science.china.com.cn/2017-03/15/content_9390265.htm. [2] CIS 2019 网络安全创新大会[EB/OL]. [2020-01-09]. https://cis.freebuf.com/. [3] Woodham R J. Photometric method for determining surface orientation from multiple images [J]. Optical Engineering, 1980, 19(1): 191139. [4] Christensen P H, Shapiro L G. Three-dimensional shape from color photometric stereo [J]. International Journal of Computer Vision, 1994, 13(2): 213−227. doi: 10.1007/BF01427152 [5] Deresiewicz H, Skalak R. On uniqueness in dynamic poroelasticity [J]. Bulletin of the Seismological Society of America, 1963, 53(4): 783−788. [6] Coleman Jr E N, Jain R. Obtaining 3-dimensional shape of textured and specular surfaces using four-source photometry [J]. Computer Graphics and Image Processing, 1982, 18(4): 309−328. doi: 10.1016/0146-664X(82)90001-6 [7] Park J S, Tou J T. Highlight separation and surface orientations for 3-D specular objects[C]//10th International Conference on Pattern Recognition. IEEE, 1990, 1: 331–335. [8] Ikeuchi K. Determining surface orientations of specular surfaces by using the photometric stereo method [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1981(6): 661−669. [9] Wu T P, Tang C K. Dense photometric stereo using a mirror sphere and graph cut[C] //2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, 1: 140–147. [10] Mozerov M G, van de Weijer J. Accurate stereo matching by two-step energy minimization [J]. IEEE Transactions on Image Processing, 2015, 24(3): 1153−1163. doi: 10.1109/TIP.2015.2395820 [11] Geiger A, Roser M, Urtasun R. Efficient large-scale stereo matching[C]//Asian Conference on Computer Vision, 2010: 25–38. [12] Tan X, Sun C, Wang D, et al. Soft cost aggregation with multi-resolution fusion[C]//European Conference on Computer Vision, 2014: 17–32. [13] Yang Q, Yang R, Davis J, et al. 
Spatial-depth super resolution for range images[C]// 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007: 1–8. [14] Yoon K J, Kweon I S. Adaptive support-weight approach for correspondence search [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2006(4): 650−656. [15] Hosni A, Rhemann C, Bleyer M, et al. Fast cost-volume filtering for visual correspondence and beyond [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(2): 504−511. [16] Yang Q, Wang L, Yang R, et al. Stereo matching with color-weighted correlation, hierarchical belief propagation, and occlusion handling [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 31(3): 492−504. [17] Klaus A, Sormann M, Karner K. Segment-based stereo matching using belief propagation and a self-adapting dissimilarity measure[C]//18th International Conference on Pattern Recognition (ICPR’06), 2006, 3: 15–18. [18] Bertozzi M, Broggi A. GOLD: A parallel real-time stereo vision system for generic obstacle and lane detection [J]. IEEE Transactions on Image Processing, 1998, 7(1): 62−81. doi: 10.1109/83.650851 [19] Loop C, Zhang Z. Computing rectifying homographies for stereo vision[C]//1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), 1999, 1: 125–131. [20] Gehrig S K, Eberli F, Meyer T. A real-time low-power stereo vision engine using semi-global matching[C]//International Conference on Computer Vision Systems, 2009: 134–143. [21] Dorrington A A, Kelly C D B, McClure S H, et al. Advantages of 3D time-of-flight range imaging cameras in machine vision applications[C]//16th New Zealand Conference (ENZCon), 2009: 18–20. [22] Ganapathi V, Plagemann C, Koller D, et al. Real time motion capture using a single time-of-flight camera[C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010: 755–762. [23] Hsu S, Acharya S, Rafii A, et al. 
Advanced Microsystems for Automotive Applications 2006[M]. Berlin: Springer, 2006: 205–219. [24] Shim H, Lee S. Performance evaluation of time-of-flight and structured light depth sensors in radiometric/geometric variations [J]. Optical Engineering, 2012, 51(9): 094401. [25] Hahne U, Alexa M. Depth imaging by combining time-of-flight and on-demand stereo[C]//Workshop on Dynamic 3D Imaging, 2009: 70–83. [26] Schuon S, Theobalt C, Davis J, et al. High-quality scanning using time-of-flight depth superresolution[C]//2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008: 1–7. [27] Cui Y, Schuon S, Thrun S, et al. Algorithms for 3d shape scanning with a depth camera [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 35(5): 1039−1050. [28] Zhang Zuxun, Zhang Jianqing. Solutions and core techniques of city modeling[D]. Wuhan: Wuhan University, 2003. (in Chinese) [29] Wang Jizhou, Li Chengming, Lin Zongjian. A survey on the technology of three dimensional spatial data acquisition[D]. Beijing: Chinese Academy of Surveying and Mapping, 2004. (in Chinese) [30] Yu Lewen, Zhang Da, Yu Bin, et al. Research of 3D laser scanning measurement system for mining [J]. Metal Mine, 2012, 436: 101−103. (in Chinese) [31] Gao Zhiguo. The research of terrestrial laser scanning data processing and modeling[D]. Xi'an: Chang'an University, 2010. (in Chinese) [32] Fang Wei. Research on automatic texture mapping of terrestrial laser scanning data combining photogrammetry techniques[D]. Wuhan: Wuhan University. (in Chinese) [33] Nayar S K, Watanabe M, Noguchi M. Real-time focus range sensor [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1996, 18(12): 1186−1198. doi: 10.1109/34.546256 [34] Watanabe M, Nayar S K. Rational filters for passive depth from defocus [J]. International Journal of Computer Vision, 1998, 27(3): 203−225. doi: 10.1023/A:1007905828438 [35] Kou L, Zhang L, Zhang K, et al. 
A multi-focus image fusion method via region mosaicking on Laplacian pyramids [J]. PloS One, 2018, 13(5): e0191085. doi: 10.1371/journal.pone.0191085 [36] Bailey S W, Echevarria J I, Bodenheimer B, et al. Fast depth from defocus from focal stacks [J]. The Visual Computer, 2015, 31(12): 1697−1708. doi: 10.1007/s00371-014-1050-2 [37] Geng J. Structured-light 3D surface imaging: a tutorial [J]. Advances in Optics and Photonics, 2011, 3(2): 128−160. doi: 10.1364/AOP.3.000128 [38] Zuo C, Feng S, Huang L, et al. Phase shifting algorithms for fringe projection profilometry: A review [J]. Optics and Lasers in Engineering, 2018, 109: 23−59. doi: 10.1016/j.optlaseng.2018.04.019 [39] Boyer K L, Kak A C. Color-encoded structured light for rapid active ranging [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1987(1): 14−28. [40] Zhang L, Curless B, Seitz S M. Rapid shape acquisition using color structured light and multi-pass dynamic programming[C]//First International Symposium on 3D Data Processing Visualization and Transmission, 2002: 24–36. [41] Pages J, Salvi J, Collewet C, et al. Optimised De Bruijn patterns for one-shot shape acquisition [J]. Image and Vision Computing, 2005, 23(8): 707−720. doi: 10.1016/j.imavis.2005.05.007 [42] Ito M, Ishii A. A three-level checkerboard pattern (TCP) projection method for curved surface measurement [J]. Pattern Recognition, 1995, 28(1): 27−40. doi: 10.1016/0031-3203(94)E0047-O [43] Maruyama M, Abe S. Range sensing by projecting multiple slits with random cuts [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993, 15(6): 647−651. doi: 10.1109/34.216735 [44] Morita H, Yajima K, Sakata S. Reconstruction of surfaces of 3d objects by m-array pattern projection method[C]//Second International Conference on Computer Vision, 1988: 468–473. [45] Posdamer J L, Altschuler M. Surface measurement by space-encoded projected beam systems [J]. Computer Graphics and Image Processing, 1982, 18(1): 1−17. 
doi: 10.1016/0146-664X(82)90096-X [46] Caspi D, Kiryati N, Shamir J. Range imaging with adaptive color structured light [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(5): 470−480. doi: 10.1109/34.682177 [47] Sansoni G, Corini S, Lazzari S, et al. Three-dimensional imaging based on gray-code light projection: characterization of the measuring algorithm and development of a measuring system for industrial applications [J]. Applied Optics, 1997, 36(19): 4463−4472. doi: 10.1364/AO.36.004463 [48] Zhang Z. Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques [J]. Optics and Lasers in Engineering, 2012, 50(8): 1097−1106. doi: 10.1016/j.optlaseng.2012.01.007 [49] Je C, Lee S W, Park R-H. High-contrast color-stripe pattern for rapid structured-light range imaging[C]//European Conference on Computer Vision, 2004: 95–107. [50] Geng Z J. Rainbow three-dimensional camera: new concept of high-speed three-dimensional vision systems [J]. Optical Engineering, 1996, 35(2): 376−384. doi: 10.1117/1.601023 [51] Salvi J, Pagès J, Batlle J. Pattern codification strategies in structured light systems [J]. Pattern Recognition, 2004, 37(4): 827−849. doi: 10.1016/j.patcog.2003.10.002 [52] Gorthi S S, Rastogi P. Fringe projection techniques: Whither we are? [J]. Optics & Lasers in Engineering, 2010, 48(2): 133−140. [53] Reich C, Ritter R, Thesing J. 3-D shape measurement of complex objects by combining photogrammetry and fringe projection [J]. Optical Engineering, 2000, 39(1): 224−232. doi: 10.1117/1.602356 [54] Huang P S, Zhang C, Chiang F-P. High-speed 3-D shape measurement based on digital fringe projection [J]. Optical Engineering, 2003, 42(1): 163−169. doi: 10.1117/1.1525272 [55] Pan B, Kemao Q, Huang L, et al. Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry [J]. Optics Letters, 2009, 34(4): 416−418. 
doi: 10.1364/OL.34.000416 [56] Quan C, He X, Wang C, et al. Shape measurement of small objects using LCD fringe projection with phase shifting [J]. Optics Communications, 2001, 189(1-3): 21−29. doi: 10.1016/S0030-4018(01)01038-0 [57] Zhang Z, Towers C E, Towers D P. Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency Selection [J]. Optics Express, 2006, 14(14): 6444−6455. doi: 10.1364/OE.14.006444 [58] Wang Z, Nguyen D A, Barnes J C. Some practical considerations in fringe projection profilometry [J]. Optics & Lasers in Engineering, 2010, 48(2): 218−225. [59] Pan J, Huang P S, Chiang F-P. Color-coded binary fringe projection technique for 3-D shape measurement [J]. Optical Engineering, 2005, 44(2): 023606. doi: 10.1117/1.1840973 [60] Kühmstedt P, Munckelt C, Heinze M, et al. 3D shape measurement with phase correlation based fringe projection[C]//SPIE, 2007, 6616: 66160B. [61] Liu H C, Halioua M, Srinivasan V. Automated phase-measuring profilometry of 3-D diffuse objects [J]. Applied Optics, 1984, 23(18): 3105. doi: 10.1364/AO.23.003105 [62] Takeda M, Ina H, Kobayashi S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry [J]. JOSA, 1982, 72(1): 156−160. doi: 10.1364/JOSA.72.000156 [63] Su X, Chen W. Fourier transform profilometry: a review [J]. Optics and Lasers in Engineering, 2001, 35(5): 263−284. doi: 10.1016/S0143-8166(01)00023-9 [64] Su X, Zhang Q. Dynamic 3-D shape measurement method: A review [J]. Optics and Lasers in Engineering, 2010, 48(2): 191−204. [65] Kemao Q. Two-dimensional windowed Fourier transform for fringe pattern analysis: principles, applications and implementations [J]. Optics and Lasers in Engineering, 2007, 45(2): 304−317. doi: 10.1016/j.optlaseng.2005.10.012 [66] Kemao Q. Windowed Fourier transform for fringe pattern analysis [J]. Applied Optics, 2004, 43(13): 2695−2702. doi: 10.1364/AO.43.002695 [67] Zhong J, Weng J. 
Spatial carrier-fringe pattern analysis by means of wavelet transform: wavelet transform profilometry [J]. Applied Optics, 2004, 43(26): 4993−4998. doi: 10.1364/AO.43.004993 [68] Malacara D. Optical Shop Testing[M]. New York: John Wiley & Sons, 2007. [69] Bruning J H, Herriott D R, Gallagher J, et al. Digital wavefront measuring interferometer for testing optical surfaces and lenses [J]. Applied Optics, 1974, 13(11): 2693−2703. doi: 10.1364/AO.13.002693 [70] Su X Y, Bally G V, Vukicevic D. Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation [J]. Optics Communications, 1993, 98(1-3): 141−150. doi: 10.1016/0030-4018(93)90773-X [71] Li J, Hassebrook L G, Guan C. Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity [J]. Journal of the Optical Society of America A, 2003, 20(1): 106−115. [72] Zhang S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques [J]. Optics and Lasers in Engineering, 2010, 48(2): 149−158. [73] Van der Jeught S, Dirckx J J. Real-time structured light profilometry: a review [J]. Optics and Lasers in Engineering, 2016, 87: 18−31. doi: 10.1016/j.optlaseng.2016.01.011 [74] Su X, Chen W. Reliability-guided phase unwrapping algorithm: a review [J]. Optics and Lasers in Engineering, 2004, 42(3): 245−261. doi: 10.1016/j.optlaseng.2003.11.002 [75] Gutmann B, Weber H. Phase unwrapping with the branch-cut method: role of phase-field direction [J]. Applied Optics, 2000, 39(26): 4802−4816. doi: 10.1364/AO.39.004802 [76] Zappa E, Busca G. Comparison of eight unwrapping algorithms applied to Fourier-transform profilometry [J]. Optics and Lasers in Engineering, 2008, 46(2): 106−116. doi: 10.1016/j.optlaseng.2007.09.002 [77] Ghiglia D C, Romero L A. Minimum Lp-norm two-dimensional phase unwrapping [J]. JOSA A, 1996, 13(10): 1999−2013. doi: 10.1364/JOSAA.13.001999 [78] Trouve E, Nicolas J-M, Maitre H. 
Improving phase unwrapping techniques by the use of local frequency estimates [J]. IEEE Transactions on Geoscience and Remote Sensing, 1998, 36(6): 1963−1972. doi: 10.1109/36.729368 [79] Zebker H A, Lu Y. Phase unwrapping algorithms for radar interferometry: residue-cut, least-squares, and synthesis algorithms [J]. JOSA A, 1998, 15(3): 586−598. doi: 10.1364/JOSAA.15.000586 [80] Huntley J M, Saldner H. Temporal phase-unwrapping algorithm for automated interferogram analysis [J]. Applied Optics, 1993, 32(17): 3047−3052. doi: 10.1364/AO.32.003047 [81] Gushov V, Solodkin Y N. Automatic processing of fringe patterns in integer interferometers [J]. Optics and Lasers in Engineering, 1991, 14(4-5): 311−324. doi: 10.1016/0143-8166(91)90055-X [82] Sansoni G, Carocci M, Rodella R. Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors [J]. Appl Opt, 1999, 38(31): 6565−6573. doi: 10.1364/AO.38.006565 [83] Zhao H, Chen W, Tan Y. Phase-unwrapping algorithm for the measurement of three-dimensional object shapes [J]. Applied Optics, 1994, 33(20): 4497−4500. doi: 10.1364/AO.33.004497 [84] Cheng Y-Y, Wyant J C. Two-wavelength phase shifting interferometry [J]. Applied Optics, 1984, 23(24): 4539−4543. doi: 10.1364/AO.23.004539 [85] Creath K, Cheng Y Y, Wyant J C. Contouring aspheric surfaces using two-wavelength phase-shifting interferometry [J]. Optica Acta: International Journal of Optics, 1985, 32(12): 1455−1464. doi: 10.1080/713821689 [86] Burke J, Bothe T, Osten W, et al. Reverse engineering by fringe projection[C]// SPIE, 2002, 4778: 312–325. [87] Ding Y, Xi J, Yu Y, et al. Recovering the absolute phase maps of two fringe patterns with selected frequencies [J]. Optics Letters, 2011, 36(13): 2518−2520. doi: 10.1364/OL.36.002518 [88] Falaggis K, Towers D P, Towers C E. Algebraic solution for phase unwrapping problems in multiwavelength interferometry [J]. 
Applied Optics, 2014, 53(17): 3737−3747. doi: 10.1364/AO.53.003737 [89] Petković T, Pribanić T, Jonlić M. Temporal phase unwrapping using orthographic projection [J]. Optics and Lasers in Engineering, 2017, 90: 34−47. doi: 10.1016/j.optlaseng.2016.09.006 [90] Xing S, Guo H. Temporal phase unwrapping for fringe projection profilometry aided by recursion of Chebyshev polynomials [J]. Applied Optics, 2017, 56(6): 1591−1602. doi: 10.1364/AO.56.001591 [91] Li Z, Shi Y, Wang C, et al. Accurate calibration method for a structured light system [J]. Optical Engineering, 2008, 47(5): 053604. doi: 10.1117/1.2931517 [92] Saldner H O, Huntley J M. Temporal phase unwrapping: application to surface profiling of discontinuous objects [J]. Applied Optics, 1997, 36(13): 2770−2775. doi: 10.1364/AO.36.002770 [93] Martinez-Celorio R A, Davila A, Kaufmann G H, et al. Extension of the displacement measurement range for electronic speckle-shearing pattern interferometry using carrier fringes and a temporal-phase-unwrapping method [J]. Optical Engineering, 2000, 39(3): 751−758. doi: 10.1117/1.602423 [94] Huang L, Asundi A K. Phase invalidity identification framework with the temporal phase unwrapping method [J]. Measurement Science and Technology, 2011, 22(3): 035304. doi: 10.1088/0957-0233/22/3/035304 [95] Tian J, Peng X, Zhao X. A generalized temporal phase unwrapping algorithm for three-dimensional profilometry [J]. Optics and Lasers in Engineering, 2008, 46(4): 336−342. doi: 10.1016/j.optlaseng.2007.11.002 [96] Pedrini G, Alexeenko I, Osten W, et al. Temporal phase unwrapping of digital hologram sequences [J]. Applied Optics, 2003, 42(29): 5846−5854. doi: 10.1364/AO.42.005846 [97] Zuo C, Huang L, Zhang M, et al. Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review [J]. Optics and Lasers in Engineering, 2016, 85: 84−103. [98] Wii [EB/OL]. Baidu Baike. [2020-01-10]. https://baike.baidu.com/item/wii/2285107?fr=aladdin. [99] Portable Wii concept: ballet motion-sensing handheld sparks a gaming craze [EB/OL]. 
[2020-01-12]. http://tech.hexun.com/2011-02-20/127427010.html. [100] Kinect [EB/OL]. Wikipedia, 2019. [101] Kinect sales exceed 8 million in 60 days; applications extend beyond gaming [EB/OL]. [2020-01-08]. http://it.sohu.com/20110117/n278914946.shtml. [102] Techweb. Ballmer: Kinect sells 8 million units in its first 60 days on the market [EB/OL]. [2020-01-08]. http://www.techweb.com.cn/world/2011-01-06/736155.shtml. [103] Home game consoles are drawing ever closer [EB/OL]. [2020-01-12]. http://games.ifeng.com/pcgame/detail_2012_11/21/19381973_2.shtml. [104] Xbox | Official site [EB/OL]. [2020-01-10]. https://www.xbox.com/zh-CN. [105] Kinect is finally discontinued: how did it ultimately fail? [EB/OL]. [2020-01-10]. www.sohu.com/a/200412330_624619. [106] iPhone X or Samsung Note8: which is worth buying? [EB/OL]. [2020-01-12]. https://www.jb51.net/shouji/576894_3.html. [107] Can't afford an iPhone X? You can still choose the iPhone X "Youth Edition" [EB/OL]. [2020-01-12]. www.sohu.com/a/223349103_114837. [108] Alipay cancels its 3-billion-yuan subsidy for face-scan payment devices; the new policy "has no cap and no upper limit" [EB/OL]. [2020-01-08]. https://weibo.com/ttarticle/p/show?id=2309404425892335058965#related. [109] 3D vision finally takes off! From Face ID to face-scan payment within two years, the next boom has arrived [EB/OL]. [2020-01-10]. https://xueqiu.com/9919963656/132900079. [110] A history of face-scan payment, heavily promoted by the two giants WeChat and Alipay [EB/OL]. [2020-01-12]. www.sohu.com/a/341173479_120295819. [111] Riding the subway, collecting parcels, drawing pensions... the face-scan era has truly arrived [EB/OL]. [2020-01-08]. www.sohu.com/a/308339470_395108. [112] The "face-scan" era arrives: riding the subway, collecting parcels, drawing pensions [EB/OL]. [2020-01-08]. http://muji.bandao.cn/a/228954.html. [113] Binocular vision and stereoscopic vision [EB/OL]. [2020-01-10]. http://amuseum.cdstm.cn/AMuseum/perceptive/page_3_eye/page_3_2b-16.htm. [114] Snavely N, Seitz S M, Szeliski R. Photo tourism: exploring photo collections in 3D[C]//ACM SIGGRAPH 2006 Proceedings, Association for Computing Machinery, 2006: 835-846. [115] Snavely N, Seitz S M, Szeliski R. Modeling the world from internet photo collections [J]. International Journal of Computer Vision, 2008, 80(2): 189−210. doi: 10.1007/s11263-007-0107-3 [116] Westoby M J, Brasington J, Glasser N F, et al. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications [J]. Geomorphology, 2012, 179: 300−314. doi: 10.1016/j.geomorph.2012.08.021 [117] Want to understand 3D structured light? This teardown is all you need! [EB/OL]. 
[2020-01-10]. https://www.jianshu.com/p/4365145add77. [118] 蚂里奥 releases a new millimeter-level 3D face perception solution [EB/OL]. [2020-01-12]. http://www.eepw.com.cn/article/201903/398768.htm. [119] Su X Y, Zhou W S, von Bally G, et al. Automated phase-measuring profilometry using defocused projection of a Ronchi grating [J]. Optics Communications, 1992, 94(6): 561−573. doi: 10.1016/0030-4018(92)90606-R [120] Beck M, Hofstetter D, Aellen T, et al. Continuous wave operation of a mid-infrared semiconductor laser at room temperature [J]. Science, 2002, 295(5553): 301−305. doi: 10.1126/science.1066408 [121] OSA | Recent Advances of VCSEL Photonics[EB/OL]. [2020-01-08]. https://www.osapublishing.org/jlt/abstract.cfm?uri=jlt-24-12-4502. [122] Swanson G J, Veldkamp W B. Diffractive optical elements for use in infrared systems [J]. Optical Engineering, 1989, 28(6): 286605. [123] Wyrowski F. Diffractive optical elements: iterative calculation of quantized, blazed phase structures [J]. JOSA A, 1990, 7(6): 961−969. doi: 10.1364/JOSAA.7.000961 [124] Wiedenmann D, Grabherr M, Jäger R, et al. High volume production of single-mode VCSELs[C]//SPIE, 2006, 6132: 613202. [125] VCSEL amplifier dot projector with folded-path slow-light waveguide for 3D depth sensing[EB/OL]. [2020-01-08]. https://ieeexplore.ieee.org/abstract/document/8516183. [126] Morinaga M, Gu X, Shimura K, et al. Compact dot projector based on folded path VCSEL amplifier for structured light sensing[C]//Conference on Lasers and Electro-Optics (2019), Optical Society of America, 2019: SM4N.4. [127] Why did VCSELs suddenly take off? [EB/OL]. [2020-01-10]. http://www.sohu.com/a/295863707_256868. [128] Front-facing 3D imaging will be dominated by structured light; its four key components pose different levels of difficulty [EB/OL]. [2020-01-10]. www.sohu.com/a/154077271_99935473. [129] Durdle N G, Thayyoor J, Raso V J. An improved structured light technique for surface reconstruction of the human trunk[C]//Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No. 98TH8341), 1998, 2: 874–877. [130] Chen C S, Hung Y P, Chiang C C, et al. 
Range data acquisition using color structured lighting and stereo vision [J]. Image and Vision Computing, 1997, 15(6): 445−456. doi: 10.1016/S0262-8856(96)01148-1 [131] MacWilliams F J, Sloane N J A. Pseudo-random sequences and arrays [J]. Proceedings of the IEEE, 1976, 64(12): 1715−1729. doi: 10.1109/PROC.1976.10411 [132] Salvi J, Batlle J, Mouaddib E. A robust-coded pattern projection for dynamic 3D scene measurement [J]. Pattern Recognition Letters, 1998, 19(11): 1055−1065. doi: 10.1016/S0167-8655(98)00085-3 [133] Pan B, Qian K, Xie H, et al. Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review [J]. Measurement Science and Technology, 2009, 20(6): 062001. doi: 10.1088/0957-0233/20/6/062001 [134] Mei X, Sun X, Zhou M, et al. On building an accurate stereo matching system on graphics hardware[C]// 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011: 467–474. [135] Zhang Ke, Lu Jiangbo, Lafruit G. Cross-based local stereo matching using orthogonal integral images [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2009, 19(7): 1073−1079. doi: 10.1109/TCSVT.2009.2020478 [136] Hirschmuller H. Accurate and efficient stereo processing by semi-global matching and mutual information[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, 2: 807–814. [137] Zhou P, Zhu J, Jing H. Optical 3-D surface reconstruction with color binary speckle pattern encoding [J]. Optics Express, 2018, 26(3): 3452. doi: 10.1364/OE.26.003452 [138] Pan B, Xie H, Wang Z, et al. Study on subset size selection in digital image correlation for speckle patterns [J]. Optics Express, 2008, 16(10): 7037. doi: 10.1364/OE.16.007037 [139] Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(11): 1222−1239. 
doi: 10.1109/34.969114 [140] Sun J, Zheng N N, Shum H Y. Stereo matching using belief propagation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(7): 14. [141] Kim J C, Lee K M, Choi B T, et al. A dense stereo matching using two-pass dynamic programming with generalized ground control points[C]//2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), 2005, 2: 1075–1082. [142] Forstmann S, Kanou Y, Ohya J, et al. Real-time stereo by using dynamic programming[C]//2004 Conference on Computer Vision and Pattern Recognition Workshop, IEEE, 2004: 29–29. [143] Motion-sensing devices – Beijing HJIMI Technology Co., Ltd. (华捷艾米) [EB/OL]. [2020-01-12]. http://www.hjimi.com/?pro/tgsb/. [144] Product specifications – Percipio (图漾科技): to exist is to be perceived [EB/OL]. [2020-01-12]. https://www.percipio.xyz/dev_detail/?model_id=276. [145] Electronics industry: smartphones enter the stock era; investment opportunities from new-technology penetration, series 10: ToF, with rapid overseas growth and emerging opportunities in the domestic supply chain [R]. Guangzhou: GF Securities (广发证券), 2016. [146] In-depth electronics industry research report: the great era of technology dividends 8: 3D imaging, the hottest innovation of the decade, with VCSELs marking a milestone in laser applications [R]. Guizhou: Huachuang Securities (华创证券), 2017. [147] Electronic components industry: why facial recognition deserves attention now [R]. Shenzhen: Essence Securities (安信证券), 2016. [148] Technical analysis: three major advantages and prospects of face recognition [EB/OL]. [2020-01-12]. http://tech.sina.com.cn/roll/2013-04-25/00592729625.shtml. [149] A tour of 100 VR companies in 2017 [EB/OL]. [2020-01-12]. www.sohu.com/a/195441916_104421. [150] Anon. Sprinting into the future? On Leap Motion and human-computer interaction [EB/OL]. [2020-01-12]. https://acc.pconline.com.cn/437/4375916.html. [151] Microsoft HoloLens [EB/OL]. Baidu Baike. [2020-01-10]. https://baike.baidu.com/item/%E5%BE%AE%E8%BD%AFHololens/16690972?fromtitle=Microsoft%20HoloLens&fromid=16630317. [152] Gesture recognition is also a hot commodity [EB/OL]. [2020-01-10]. https://yq.aliyun.com/articles/599214. [153] Analysis of global VR/AR industry investment in 2018: China is a major investment destination [EB/OL]. [2020-01-10]. https://www.qianzhan.com/analyst/detail/220/180606-41ef69ab.html. [154] 众趣科技 can clone a 3D real-world scene for you in just 90 minutes [EB/OL]. [2020-01-10]. www.sohu.com/a/228724895_99985415. [155] US company's 3D masks successfully fool WeChat, Alipay and other face-scan payment systems [EB/OL]. [2020-01-12]. https://www.shangyexinzhi.com/article/details/id-387726/. [156] The world's first ToF phone arrives [EB/OL]. [2020-01-10]. www.sohu.com/a/247788268_115037. [157] Liu K, Wang Y, Lau D L, et al. 
Dual-frequency pattern scheme for high-speed 3-D shape measurement [J]. Optics Express, 2010, 18(5): 5229−5244. doi: 10.1364/OE.18.005229 [158] Zuo C, Chen Q, Gu G, et al. High-speed three-dimensional profilometry for multiple objects with complex shapes [J]. Optics Express, 2012, 20(17): 19493−19510. doi: 10.1364/OE.20.019493 [159] Weise T, Leibe B, Van Gool L. Fast 3D Scanning with automatic motion compensation[C]//Computer Vision and Pattern Recognition, 2007. CVPR ’07. IEEE Conference on, 2007: 1–8. [160] Li Z, Zhong K, Li Y F, et al. Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects [J]. Optics Letters, 2013, 38(9): 1389−1391. doi: 10.1364/OL.38.001389 [161] Tao T, Chen Q, Da J, et al. Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system [J]. Optics Express, 2016, 24(18): 20253−20269. doi: 10.1364/OE.24.020253 [162] Qian J, Tao T, Feng S, et al. Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry [J]. Optics Express, 2019, 27(3): 2713. doi: 10.1364/OE.27.002713 [163] Tao T, Chen Q, Feng S, et al. High-precision real-time 3D shape measurement based on a quad-camera system [J]. Journal of Optics, 2018, 20(1): 014009. doi: 10.1088/2040-8986/aa9e0f [164] Liu Z, Zibley P C, Zhang S. Motion-induced error compensation for phase shifting profilometry [J]. Optics Express, 2018, 26(10): 12632−12637. doi: 10.1364/OE.26.012632 [165] Feng S, Zuo C, Tao T, et al. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry [J]. Optics and Lasers in Engineering, 2018, 103: 127−138. doi: 10.1016/j.optlaseng.2017.12.001 [166] Zhang Y, Xiong Z, Yang Z, et al. Real-time scalable depth sensing with hybrid structured light illumination [J]. IEEE Transactions on Image Processing, 2013, 23(1): 97−109. [167] Li B, Liu Z, Zhang S. 
Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry [J]. Optics Express, 2016, 24(20): 23289. doi: 10.1364/OE.24.023289 [168] Liu X, Peng X, Chen H, et al. Strategy for automatic and complete three-dimensional optical digitization [J]. Optics Letters, 2012, 37(15): 3126. doi: 10.1364/OL.37.003126 [169] Song L, Ru Y, Yang Y, et al. Full-view three-dimensional measurement of complex surfaces [J]. Optical Engineering, 2018, 57(10): 1. [170] Nießner M, Zollhöfer M, Izadi S, et al. Real-time 3D reconstruction at scale using voxel hashing [J]. ACM Transactions on Graphics, 2013, 32(6): 1−11. [171] Epstein E, Granger-Piche M, Poulin P. Exploiting mirrors in interactive reconstruction with structured light[C]//Vision, Modeling, and Visualization, 2004: 125-132. [172] Lanman D, Crispell D, Taubin G. Surround structured lighting: 3-D scanning with orthographic illumination [J]. Computer Vision and Image Understanding, 2009, 113(11): 1107−1117. doi: 10.1016/j.cviu.2009.03.016 [173] Chen B, Pan B. Mirror-assisted panoramic-digital image correlation for full-surface 360-deg deformation measurement [J]. Measurement, 2019, 132: 350−358. doi: 10.1016/j.measurement.2018.09.046 [174] Holz D, Ichim A E, Tombari F, et al. Registration with the point cloud library: A modular framework for aligning in 3-D [J]. IEEE Robotics & Automation Magazine, 2015, 22(4): 110−124. [175] Mohammadzade H, Hatzinakos D. Iterative closest normal point for 3D face recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(2): 381−397. doi: 10.1109/TPAMI.2012.107 [176] Qian J, Feng S, Tao T, et al. High-resolution real-time 360° 3D model reconstruction of a handheld object with fringe projection profilometry [J]. Optics Letters, 2019, 44(23): 5751. doi: 10.1364/OL.44.005751 [177] Mariottini G L, Scheggi S, Morbidi F, et al. Planar mirrors for image-based robot localization and 3-D reconstruction [J]. 
Mechatronics, 2012, 22(4): 398−409. doi: 10.1016/j.mechatronics.2011.09.004 [178] Wang P, Wang J, Xu J, et al. Calibration method for a large-scale structured light measurement system [J]. Applied Optics, 2017, 56(14): 3995. doi: 10.1364/AO.56.003995 [179] Yin W, Feng S, Tao T, et al. Calibration method for panoramic 3D shape measurement with plane mirrors [J]. Optics Express, 2019, 27(25): 36538. doi: 10.1364/OE.27.036538 [180] Feng S, Chen Q, Zuo C, et al. Automatic identification and removal of outliers for high-speed fringe projection profilometry [J]. Optical Engineering, 2013, 52(1): 013605. doi: 10.1117/1.OE.52.1.013605 [181] Lu J, Mo R, Sun H, et al. Invalid phase values removal method for absolute phase recovery [J]. Applied Optics, 2016, 55(2): 387−394. doi: 10.1364/AO.55.000387 [182] Lu J, Mo R, Sun H, et al. Simplified absolute phase retrieval of dual-frequency fringe patterns in fringe projection profilometry [J]. Optics Communications, 2016, 364: 101−109. doi: 10.1016/j.optcom.2015.11.022 [183] Wang H, Kemao Q, Soon S H. Valid point detection in fringe projection profilometry [J]. Optics Express, 2015, 23(6): 7535−7549. doi: 10.1364/OE.23.007535 [184] Yau S T. High dynamic range scanning technique [J]. Optical Engineering, 2009, 48(3): 033604. doi: 10.1117/1.3099720 [185] Qi Z, Wang Z, Huang J, et al. Improving the quality of stripes in structured-light three-dimensional profile measurement [J]. Optical Engineering, 2016, 56(3): 031208. doi: 10.1117/1.OE.56.3.031208 [186] Long Y, Wang S, Wu W, et al. Accurate identification of saturated pixels for high dynamic range measurement [J]. Optical Engineering, 2015, 54(4): 043106. doi: 10.1117/1.OE.54.4.043106 [187] Zhang B, Ouyang Y, Zhang S. High dynamic range saturation intelligence avoidance for three-dimensional shape measurement[C]//IEEE, 2015: 981–990. [188] Ekstrand L. Autoexposure for three-dimensional shape measurement using a digital-light-processing projector [J]. 
Optical Engineering, 2011, 50(12): 123603. doi: 10.1117/1.3662387 [189] Zhong K, Li Z, Zhou X, et al. Enhanced phase measurement profilometry for industrial 3D inspection automation [J]. The International Journal of Advanced Manufacturing Technology, 2015, 76(9-12): 1563−1574. doi: 10.1007/s00170-014-6360-z [190] Rao L, Da F. High dynamic range 3D shape determination based on automatic exposure selection [J]. Journal of Visual Communication and Image Representation, 2018, 50: 217−226. doi: 10.1016/j.jvcir.2017.12.003 [191] Song Z, Jiang H, Lin H, et al. A high dynamic range structured light means for the 3D measurement of specular surface [J]. Optics and Lasers in Engineering, 2017, 95: 8−16. doi: 10.1016/j.optlaseng.2017.03.008 [192] Feng S, Chen Q, Zuo C, et al. Fast three-dimensional measurements for dynamic scenes with shiny surfaces [J]. Optics Communications, 2017, 382: 18−27. doi: 10.1016/j.optcom.2016.07.057 [193] Waddington C, Kofman J. Saturation avoidance by adaptive fringe projection in phase-shifting 3D surface-shape measurement[C]// IEEE, 2010: 1–4. [194] Waddington C, Kofman J. Modified sinusoidal fringe-pattern projection for variable illuminance in phase-shifting three-dimensional surface-shape metrology [J]. Optical Engineering, 2014, 53(8): 084109. doi: 10.1117/1.OE.53.8.084109 [195] Zhang L, Chen Q, Zuo C, et al. High dynamic range 3D shape measurement based on the intensity response function of a camera [J]. Applied Optics, 2018, 57(6): 1378. doi: 10.1364/AO.57.001378 [196] Li D, Kofman J. Adaptive fringe-pattern projection for image saturation avoidance in 3D surface-shape measurement [J]. Optics Express, 2014, 22(8): 9887. doi: 10.1364/OE.22.009887 [197] Chen C, Gao N, Wang X, et al. Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection [J]. Measurement Science and Technology, 2018, 29(5): 055203. doi: 10.1088/1361-6501/aab07a [198] Lin H, Gao J, Mei Q, et al. 
Adaptive digital fringe projection technique for high dynamic range three-dimensional shape measurement [J]. Optics Express, 2016, 24(7): 7703. doi: 10.1364/OE.24.007703 [199] Lin H, Gao J, Mei Q, et al. Three-dimensional shape measurement technique for shiny surfaces by adaptive pixel-wise projection intensity adjustment [J]. Optics and Lasers in Engineering, 2017, 91: 206−215. doi: 10.1016/j.optlaseng.2016.11.015 [200] Chen S, Xia R, Zhao J, et al. Analysis and reduction of phase errors caused by nonuniform surface reflectivity in a phase-shifting measurement system [J]. Optical Engineering, 2017, 56(3): 033102. doi: 10.1117/1.OE.56.3.033102 [201] Babaie G, Abolbashari M, Farahi F. Dynamics range enhancement in digital fringe projection technique [J]. Precision Engineering, 2015, 39: 243−251. doi: 10.1016/j.precisioneng.2014.06.007 [202] Sheng H, Xu J, Zhang S. Dynamic projection theory for fringe projection profilometry [J]. Applied Optics, 2017, 56(30): 8452. doi: 10.1364/AO.56.008452 [203] Qi Z, Wang Z. Highlight removal based on the regional-projection fringe projection method [J]. Optical Engineering, 2018, 57(04): 1. [204] Ri S, Fujigaki M, Morimoto Y. Intensity range extension method for three-dimensional shape measurement in phase-measuring profilometry using a digital micromirror device camera [J]. Applied Optics, 2008, 47(29): 5400. doi: 10.1364/AO.47.005400 [205] Chen T, Lensch H P A, Fuchs C, et al. Polarization and phase-shifting for 3D scanning of translucent objects[C]// IEEE, 2007: 1–8. [206] Salahieh B, Chen Z, Rodriguez J J, et al. Multi-polarization fringe projection imaging for high dynamic range objects [J]. Optics Express, 2014, 22(8): 10064. doi: 10.1364/OE.22.010064 [207] Cai Z, Liu X, Peng X, et al. Structured light field 3D imaging[J]. Optics Express, 2016, 24(18): 20324-20334. [208] Feng S, Zhang Y, Chen Q, et al. General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique [J]. 
Optics and Lasers in Engineering, 2014, 59: 56−71. doi: 10.1016/j.optlaseng.2014.03.003 [209] Liu G, Liu X Y, Feng Q Y. 3D shape measurement of objects with high dynamic range of surface reflectivity [J]. Applied Optics, 2011, 50(23): 4557. doi: 10.1364/AO.50.004557 [210] Jiang H, Zhao H, Li X. High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces [J]. Optics and Lasers in Engineering, 2012, 50(10): 1484−1493. doi: 10.1016/j.optlaseng.2011.11.021 [211] Zhao H, Liang X, Diao X, et al. Rapid in-situ 3D measurement of shiny object based on fast and high dynamic range digital fringe projector [J]. Optics and Lasers in Engineering, 2014, 54: 170−174. doi: 10.1016/j.optlaseng.2013.08.002 [212] Yin Y, Cai Z, Jiang H, et al. High dynamic range imaging for fringe projection profilometry with single-shot raw data of the color camera [J]. Optics and Lasers in Engineering, 2017, 89: 138−144. doi: 10.1016/j.optlaseng.2016.08.019 [213] Jiang C, Bell T, Zhang S. High dynamic range real-time 3D shape measurement [J]. Optics Express, 2016, 24(7): 7337. doi: 10.1364/OE.24.007337 [214] Wang M, Du G, Zhou C, et al. Enhanced high dynamic range 3D shape measurement based on generalized phase-shifting algorithm [J]. Optics Communications, 2017, 385: 43−53. doi: 10.1016/j.optcom.2016.10.023 [215] Chen Y, He Y, Hu E. Phase deviation analysis and phase retrieval for partial intensity saturation in phase-shifting projected fringe profilometry [J]. Optics Communications, 2008, 281(11): 3087−3090. doi: 10.1016/j.optcom.2008.01.070 [216] Hu E, He Y, Chen Y. Study on a novel phase-recovering algorithm for partial intensity saturation in digital projection grating phase-shifting profilometry [J]. Optik-International Journal for Light and Electron Optics, 2010, 121(1): 23−28. doi: 10.1016/j.ijleo.2008.05.010 [217] Chen B, Zhang S. High-quality 3D shape measurement using saturated fringe patterns [J]. Optics and Lasers in Engineering, 2016, 87: 83−89. 
doi: 10.1016/j.optlaseng.2016.04.012 [218] Qi Z, Wang Z, Huang J, et al. Error of image saturation in the structured-light method [J]. Applied Optics, 2018, 57(1): A181−A188. doi: 10.1364/AO.57.00A181 [219] Zhang L, Chen Q, Zuo C, et al. High dynamic range 3D shape measurement based on time domain superposition [J]. Measurement Science and Technology, 2019, 30(6): 065004. [220] Feng S, Zhang L, Zuo C, et al. High dynamic range 3D measurements with fringe projection profilometry: a review [J]. Measurement Science and Technology, 2018, 29(12): 122001. doi: 10.1088/1361-6501/aae4fb [221] Feng S, Chen Q, Gu G, et al. Fringe pattern analysis using deep learning [J]. Advanced Photonics, 2019, 1(2): 025001. [222] Feng S, Zuo C, Yin W, et al. Micro deep learning profilometry for high-speed 3D surface imaging [J]. Optics and Lasers in Engineering, 2019, 121: 416−427. doi: 10.1016/j.optlaseng.2019.04.020 [223] Lei S, Zhang S. Flexible 3-D shape measurement using projector defocusing [J]. Optics Letters, 2009, 34(20): 3080−3082. doi: 10.1364/OL.34.003080 [224] Ayubi G A, Ayubi J A, Di Martino J M, et al. Pulse-width modulation in defocused three-dimensional fringe projection [J]. Optics Letters, 2010, 35(21): 3682−3684. doi: 10.1364/OL.35.003682 [225] Zuo C, Chen Q, Feng S, et al. Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing [J]. Applied Optics, 2012, 51(19): 4477−4490. doi: 10.1364/AO.51.004477 [226] Zuo C, Chen Q, Gu G, et al. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection [J]. Optics and Lasers in Engineering, 2013, 51(8): 953−960. doi: 10.1016/j.optlaseng.2013.02.012 [227] Wang Y, Zhang S. Superfast multifrequency phase-shifting technique with optimal pulse width modulation [J]. Optics Express, 2011, 19(6): 5149−5155. doi: 10.1364/OE.19.005149 [228] Wang Y, Zhang S. 
Three-dimensional shape measurement with binary dithered patterns [J]. Applied Optics, 2012, 51(27): 6631−6636. doi: 10.1364/AO.51.006631 [229] Dai J, Zhang S. Phase-optimized dithering technique for high-quality 3D shape measurement [J]. Optics and Lasers in Engineering, 2013, 51(6): 790−795. doi: 10.1016/j.optlaseng.2013.02.003 [230] Dai J, Li B, Zhang S. High-quality fringe pattern generation using binary pattern optimization through symmetry and periodicity [J]. Optics and Lasers in Engineering, 2014, 52: 195−200. doi: 10.1016/j.optlaseng.2013.06.010 [231] Sun J, Zuo C, Feng S, et al. Improved intensity-optimized dithering technique for 3D shape measurement [J]. Optics and Lasers in Engineering, 2015, 66: 158−164. doi: 10.1016/j.optlaseng.2014.09.008 [232] Dai J, Li B, Zhang S. Intensity-optimized dithering technique for three-dimensional shape measurement with projector defocusing [J]. Optics and Lasers in Engineering, 2014, 53: 79−85. doi: 10.1016/j.optlaseng.2013.08.015 [233] Zhang S, Van D W D, Oliver J. Superfast phase-shifting method for 3-D shape measurement [J]. Optics Express, 2010, 18(9): 9684. doi: 10.1364/OE.18.009684 [234] Gong Y, Zhang S. Ultrafast 3-D shape measurement with an off-the-shelf DLP projector [J]. Optics Express, 2010, 18(19): 19743−19754. doi: 10.1364/OE.18.019743 [235] Zuo C, Tao T, Feng S, et al. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second [J]. Optics and Lasers in Engineering, 2018, 102: 70−91. doi: 10.1016/j.optlaseng.2017.10.013 [236] Zhang Q, Su X, Cao Y, et al. Optical 3-D shape and deformation measurement of rotating blades using stroboscopic structured illumination [J]. Optical Engineering, 2005, 44(11): 113601. doi: 10.1117/1.2127927 [237] Schaffer M, Grosse M, Harendt B, et al. High-speed optical 3-d measurements for shape representation [J]. Optics and Photonics News, 2011, 22(12): 49−49. doi: 10.1364/OPN.22.12.000049 [238] Schaffer M, Grosse M, Harendt B, et al. 
High-speed three-dimensional shape measurements of objects with laser speckles and acousto-optical deflection [J]. Optics Letters, 2011, 36(16): 3097−3099. doi: 10.1364/OL.36.003097 [239] Schaffer M, Grosse M, Harendt B, et al. Statistical patterns: an approach for high-speed and high-accuracy shape measurements [J]. Optical Engineering, 2014, 53(11): 112205. doi: 10.1117/1.OE.53.11.112205 [240] Grosse M, Schaffer M, Harendt B, et al. Fast data acquisition for three-dimensional shape measurement using fixed-pattern projection and temporal coding [J]. Optical Engineering, 2011, 50(10): 100503. doi: 10.1117/1.3646100 [241] Fujigaki M, Sakaguchi T, Murata Y. Development of a compact 3D shape measurement unit using the light-source-stepping method [J]. Optics and Lasers in Engineering, 2016, 85: 9−17. doi: 10.1016/j.optlaseng.2016.04.016 [242] Heist S, Mann A, Kühmstedt P, et al. Array projection of aperiodic sinusoidal fringes for high-speed three-dimensional shape measurement [J]. Optical Engineering, 2014, 53(11): 112208. doi: 10.1117/1.OE.53.11.112208 [243] Heist S, Lutzke P, Schmidt I, et al. High-speed three-dimensional shape measurement using GOBO projection [J]. Optics and Lasers in Engineering, 2016, 87: 90−96. doi: 10.1016/j.optlaseng.2016.02.017 [244] Heist S. 5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light [J]. Optics Express, 2018: 14. [245] Landmann M, Heist S, Dietrich P, et al. High-speed 3D thermography [J]. Optics and Lasers in Engineering, 2019, 121: 448−455. doi: 10.1016/j.optlaseng.2019.05.009 [246] Zhang M, Chen Q, Tao T, et al. Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection [J]. Optics Express, 2017, 25(17): 20381. doi: 10.1364/OE.25.020381