Articles in press have been peer-reviewed and accepted; they have not yet been assigned to volumes/issues, but are citable by Digital Object Identifier (DOI).
Metal Water-Triple-Point Automatic Reproduction Control System for In-Situ Online Calibration of Temperature Sensors
Qiao Zhigang, Gao Dexin, Zhang Muzi, Zhao Shanshan, Wu Jiali, Su Juan, Chen Shenggong, Jing Chao, Liu Hailing, Yang Bo, Wu Chi
Accepted Manuscript  doi: 10.3788/IRLA20240096
  Objective  The triple point of water refers to the state in which water, ice, and vapor coexist, with an equilibrium temperature of 273.16 K (0.01 ℃). In the International Temperature Scale, the triple point of water serves as the sole reference point for defining the thermodynamic temperature unit kelvin, and it is one of the most important fixed points in ITS-90 [1-2]. The thermodynamic reproduction of the water triple point is crucial for practical temperature measurements [3]. The triple point is reproduced by freezing an ice mantle inside a triple point of water cell; cells with borosilicate glass or fused silica shells are widely used under the ITS-90 guidelines. Traditional reproduction methods include the ice-salt mixture cooling method, the dry ice cooling method, and the liquid nitrogen cooling method. These methods all require cooling the triple point of water cell with dry ice, liquid nitrogen, or another cryogenic medium, freezing the high-purity water inside the cell, and then storing it in an ice bath. While these traditional methods offer high reproduction accuracy and good results, they are complex, operationally difficult, and demand high standards of operators and the environment, making them inconvenient for on-site calibration and integrated applications [2-3]. Addressing the limitations of traditional triple point of water cells and reproduction methods for in-situ applications, such as the on-site calibration of temperature sensors in the deep sea, this paper investigates a miniaturized triple point reproduction control system suitable for the automatic calibration of temperature sensors, based on a self-developed miniature metal water triple point cell.  Methods  This control system utilizes the spontaneous phase transition of high-purity water in a metal water triple point container, combined with a thermoelectric cooler (TEC) based on the semiconductor Peltier effect and a temperature control circuit, to achieve automatic reproduction and maintenance of the water triple point. Phase transition monitoring is achieved with thermistors and temperature detection circuits. By employing a dual-thermistor setup and the TEC in closed-loop control, the system adjusts the driving power of the TEC based on the temperature difference detected by the feedback thermistors, thereby realizing automatic reproduction and maintenance of the water triple point.  Results and Discussions  Figures 1(a) and (b) respectively show the control schematic of the automatic reproduction system for the metal water triple point bottle and a photograph of the actual bottle. The research employed a miniaturized metal water triple point bottle, utilizing the spontaneous phase transition of high-purity water, along with a TEC based on the semiconductor Peltier effect and a temperature control circuit, to achieve reproduction and maintenance of the water triple point. High-sensitivity thermistors combined with a temperature detection circuit were used to monitor the phase transition of the high-purity water. A closed loop consisting of dual thermistors and the TEC was utilized.
Based on the temperature difference detected by the feedback thermistors, the study investigated the cooling demand of the high-purity water phase transition and established a thermodynamic model of the triple point bottle cooling system. By appropriately adjusting the TEC's driving power, the water triple point state was reproduced and maintained for an extended period. The measurement results in Figure 2 indicate that significant supercooling of the high-purity water inside the metal water triple point bottle was observed: it remained unfrozen at the liquid-solid phase equilibrium temperature (0 ℃) and suddenly underwent a phase transition when the temperature reached the transition temperature (approximately −7.3 ℃), causing a rapid increase in the internal trap temperature, which then stabilized, with a stability duration of 20 minutes and a temperature fluctuation of ±1 mK. The analysis of the experiment demonstrates that the miniaturized triple point temperature automatic reproduction control system based on the metal water triple point bottle can achieve spontaneous phase transition of high-purity water and maintain a stable temperature plateau for a certain period, facilitating high-precision in-situ calibration of temperature sensors.  Conclusions  This study indicates that combining the metal water triple point bottle with properly arranged temperature monitoring sensors, a TEC cooling system, and a refrigeration control circuit and algorithm can automatically reproduce and maintain the high-purity water triple point state for 20 minutes, with a temperature fluctuation of ±1 mK. This provides an accurate, stable, and sustainable environment for in-situ calibration of temperature sensors, serving high-precision in-situ temperature calibration in deep-sea and deep-space environments.
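To make the closed-loop idea above concrete, the following minimal Python sketch drives a toy first-order thermal plant toward the 0.01 ℃ setpoint with a PI loop adjusting TEC drive power. The plant constants and controller gains are illustrative assumptions, not the authors' firmware or hardware parameters.

```python
# Toy simulation of the dual-thermistor / TEC loop described above (a sketch,
# not the paper's controller): PI output sets normalized TEC drive in [0, 1].
TRIPLE_POINT_C = 0.01  # degC, water triple point

class PILoop:
    """PI controller with clamping anti-windup for the TEC drive."""
    def __init__(self, kp=0.8, ki=0.05):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, error, dt):
        u = self.kp * error + self.ki * self.integral
        u_clamped = min(max(u, 0.0), 1.0)
        if u == u_clamped:               # integrate only when not saturated
            self.integral += error * dt
        return u_clamped

def simulate(minutes=30, dt=1.0):
    t_cell, ambient = 20.0, 20.0   # degC, toy initial/ambient temperatures
    tau, max_pull = 120.0, 0.5     # s, K/s: assumed plant constants
    loop = PILoop()
    for _ in range(int(minutes * 60 / dt)):
        error = t_cell - TRIPLE_POINT_C          # positive -> needs cooling
        drive = loop.step(error, dt)
        # toy plant: TEC pulls heat out, the ambient leaks it back in
        t_cell += dt * (-max_pull * drive + (ambient - t_cell) / tau)
    return t_cell

if __name__ == "__main__":
    print(f"final cell temperature: {simulate():.4f} degC")
```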
Verification of demodulation method for differential optical Doppler velocimetry data
Zhang Zhijun, Song Ran, Jiang Lili, Zhang Xinyu, Li Bingbing, Chen Shenggong, Su Juan, Wu Chi
Accepted Manuscript  doi: 10.3788/IRLA20240094
  Objective  In the field of physical oceanographic research, seawater flow velocity is one of the key parameters, primarily measured with acoustic Doppler velocimeters. In recent years, laser Doppler technology has made significant advances in seawater flow velocity measurement. Laser Doppler velocimetry, with its simple and integrable structure, is expected to be a complementary technique to acoustic Doppler velocimeters in marine applications. Compared with acoustic velocity measurement techniques, laser Doppler velocimeters offer several advantages: their shorter wavelength (in the micron range) allows the study of smaller-scale water features, and they resist the noise interference generated by underwater vehicles when deployed on unmanned underwater vehicles. However, due to seawater absorption and scattering, the detected signal is extremely weak and buried in strong noise, posing challenges for Doppler signal demodulation. Moreover, limited by the sampling frequency, there is an error between the peak position of the data spectrum and the true frequency. Therefore, effectively removing noise interference and improving measurement accuracy are crucial for laser Doppler velocimeters. In this paper, an adaptive filtering algorithm is employed to denoise the collected signal, followed by a fast Fourier transform to enhance the signal-to-noise ratio. Three peak-finding algorithms are compared, and the Gaussian-LM algorithm is selected to process the power spectrum of the signal, bringing the estimated peak position closer to the true peak, thereby improving the demodulation accuracy of the Doppler signal and significantly reducing the error caused by noise.  Methods  The principle of laser Doppler velocimetry is illustrated in Figure 1(a). A laser beam is split into two equal beams by an optical fiber splitter after passing through a single-mode optical fiber. The two beams are collimated into parallel beams by a collimator and directed onto a plano-convex lens at the end, which focuses them onto a point outside the instrument, generating interference fringes at the focal point. When particles in the water pass through these fringes, they scatter light, which is collected by the plano-convex lens and converted into parallel light. The scattered light is then detected by an avalanche photodetector and converted into an electrical signal, which is acquired by an oscilloscope. The acquired signal undergoes algorithmic processing to demodulate the flow velocity. Figure 1(b) is a field photo of the optical system prototype being tested in the marine environment off Qingdao. The key to signal processing is accurately extracting the Doppler frequency shift from a large amount of noise; since the noise in the Doppler signal is non-stationary, the least mean square (LMS) algorithm can effectively denoise it. The fast Fourier transform shifts the analysis from the time domain to the frequency domain, where the regularity of the Doppler frequency is easier to analyze. Further, the Gaussian-LM algorithm is employed to perform peak finding on the Doppler spectrum, obtaining accurate frequency information.  Results and Discussions  Through simulation, the optimal peak-finding algorithm was selected.
The Monte Carlo algorithm, the Gaussian fitting algorithm, and the Gaussian-LM algorithm were employed to find peaks in Gaussian signals with added noise, and their measurement accuracies were compared, as shown in Figure 2(a). Peak-finding calculations were conducted on multiple datasets, and their standard deviations are illustrated in Figure 2(b). The results indicate that the Monte Carlo algorithm exhibited the lowest peak-finding accuracy, while the Gaussian-LM algorithm demonstrated the highest. Moreover, the Gaussian-LM algorithm exhibited a smaller standard deviation than the other algorithms, with a lower fluctuation range, indicating greater stability. Therefore, the Gaussian-LM algorithm was chosen for peak finding in the Doppler signal. A comparative seawater velocity experiment was conducted at the Zhongyuan Tourist Dock in Qingdao, China, using a home-made laser Doppler velocimeter (LDV) and an acoustic Doppler velocimeter (ADV, SonTek Argonaut-ADV). Algorithmic research was carried out on the obtained seawater velocity data. Considering the different sampling rates of the two instruments, the data were first averaged over 30 minutes. From Figure 3(a), the data before algorithm processing roughly follow the trend of the velocities measured by the ADV, but discrepancies remain; the data after algorithm processing fit the ADV measurements more closely. Figure 3(b) shows the errors of the data before and after processing relative to the ADV, together with the average error. The error analysis shows that the average error between the pre-processed LDV and ADV velocity measurements was 0.2905 cm/s, while the average error between the post-processed LDV and ADV measurements was 0.2163 cm/s, a 25.5% reduction.  Conclusions  The light scattered from suspended particles in seawater is extremely weak. Extracting signals submerged in noise and demodulating them to obtain velocity information is a challenge for accurate measurement with laser Doppler velocimeters. In this paper, demodulation algorithms were studied on velocity data obtained from experiments near the shore of Qingdao. Initially, through simulation and optimization, the Gaussian-LM algorithm was selected as the peak-finding algorithm. Subsequently, signal denoising based on the least mean square (LMS) algorithm was performed on the velocity data obtained during sea trials, combined with Gaussian-LM peak finding, achieving high-precision demodulation. Comparative experiments between the home-made laser Doppler velocimeter and a well-known commercial acoustic Doppler velocimeter indicate that the post-processed velocity measurement error of this algorithm is 0.21 cm/s, a 25.5% reduction compared with the pre-processing result.
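The following Python sketch strings together the three demodulation stages named above under stated assumptions: LMS denoising (implemented here as an adaptive line enhancer), an FFT power spectrum, and Gaussian peak refinement with a Levenberg-Marquardt fit (scipy's curve_fit defaults to LM for unbounded problems). The sample rate, Doppler frequency, and filter settings are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def lms_enhance(x, taps=32, mu=0.005, delay=1):
    """Adaptive line enhancer: an LMS predictor keeps the periodic Doppler
    component (predictable from past samples) and rejects broadband noise."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(taps + delay, len(x)):
        u = x[n - delay - taps:n - delay][::-1]   # delayed tap vector
        y[n] = w @ u
        w += mu * (x[n] - y[n]) * u               # LMS weight update
    return y

def gaussian(f, a, f0, sigma):
    return a * np.exp(-(f - f0) ** 2 / (2 * sigma ** 2))

fs, f_doppler = 100e3, 12.34e3        # assumed sample rate and Doppler shift
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f_doppler * t) + 2.0 * np.random.randn(t.size)

y = lms_enhance(x)
spec = np.abs(np.fft.rfft(y * np.hanning(y.size))) ** 2
freqs = np.fft.rfftfreq(y.size, 1 / fs)

k = int(np.argmax(spec))                          # coarse peak bin
win = slice(max(k - 8, 0), k + 9)                 # small window around the peak
p0 = [spec[k], freqs[k], 2 * fs / y.size]         # initial guess for the LM fit
(a, f0, sigma), _ = curve_fit(gaussian, freqs[win], spec[win], p0=p0)
print(f"coarse peak: {freqs[k]:.1f} Hz, LM-refined: {f0:.1f} Hz")
```

The LM refinement matters because the coarse peak is quantized to the FFT bin spacing fs/N; fitting a Gaussian to the bins around the peak interpolates between bins, which is the error source the abstract attributes to the limited sampling frequency.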
Image processing
Survey of research methods in infrared image dehazing
Tang Wenjuan, Dai Qun
2024, 53(2): 20230416.   doi: 10.3788/IRLA20230416
  Significance   Infrared image dehazing refers to restoring the contrast and visual quality of infrared images by removing the influence of haze, smoke, and other media in the presence of atmospheric turbulence. Infrared images are widely used in military, security, medical, energy exploration, and other fields thanks to their all-day operation and independence from illumination. Enhanced Image Visibility: Infrared images captured in hazy or foggy conditions often suffer from reduced visibility and degraded image quality. Dehazing techniques aim to improve the visibility of these images, allowing for better interpretation and analysis. Improved Object Detection and Recognition: Dehazing infrared images can enhance the performance of object detection and recognition algorithms. By removing the haze, important visual features of objects can be more clearly revealed, leading to more accurate and reliable results in applications such as surveillance, target tracking, and autonomous vehicles. Enhanced Environmental Monitoring: Infrared imaging is widely used in environmental monitoring, including forest fire detection, air pollution monitoring, and thermal inspection of infrastructure. Dehazing techniques can help improve the accuracy and reliability of these monitoring systems by providing clearer and more detailed infrared images. Enhanced Human Perception: Dehazing infrared images can also benefit human observers by providing clearer and more understandable visual information. This is particularly important in applications where human operators rely on infrared images for decision-making, such as search and rescue operations, firefighting, and security surveillance. Advancements in Computer Vision Research: Dehazing infrared images presents a challenging problem in computer vision research. Developing effective dehazing algorithms for infrared images requires the exploration and development of novel techniques, such as image enhancement, deconvolution, and scene understanding. Research in this area can contribute to the advancement of computer vision as a whole and benefit other related fields.   Progress   In recent years, with the continuous development of computer vision and deep learning technologies, significant progress has been made in infrared image dehazing techniques, providing support for the development of infrared image applications. According to the type of data relied upon during dehazing, existing methods can be divided into two categories: multi-information fusion and single-frame image processing. Image dehazing is highly challenging because the degradation level of an image is influenced by factors such as the concentration of suspended particles and the distance between the target and the detector, and this information is difficult to obtain directly from the image. Researchers have proposed multi-information fusion algorithms that assist the restoration of infrared images by fusing additional information acquired through sensor fusion or multiple images. These methods mainly include polarization image dehazing (Fig.2) and fusion-weighted image dehazing. Single-frame image processing refers to digital or image processing applied to individual static images.
In practical applications, single-frame image processing is often combined with machine learning, deep learning, and other technologies to achieve better results. This article mainly discusses image enhancement and image reconstruction in single-frame image processing. For image enhancement, the multi-scale Retinex (MSR) algorithm (Fig.5) is combined with the CLAHE algorithm to enhance foggy images (Fig.3, Fig.4). Image reconstruction applied to infrared image dehazing estimates unknown information from the characteristics of known information, which can be used to restore image quality degraded by haze. The main methods include the dark channel prior, superpixel and MRF (Fig.7), atmospheric light estimation-based (Fig.8), color attenuation prior-based (Fig.9), detail transmission prior-based, and gradient channel prior-based dehazing algorithms. Overall, both multi-information fusion and single-frame image processing approaches contribute to the advancement of infrared image dehazing techniques by leveraging different types of data and image processing algorithms.   Conclusions and Prospects   Infrared image dehazing technology will become more intelligent. Researchers increasingly use deep learning and convolutional neural network (CNN) techniques to achieve automated haze removal. In the future, infrared image dehazing technology is expected to be deeply integrated with other image processing techniques. Multi-modal fusion is a technique for extracting the most useful information from multiple data sources to improve the understanding and processing of image data and to enhance image quality and processing efficiency. To improve the accuracy of infrared image dehazing, incorporating visible light images or depth images can be beneficial.
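Of the reconstruction methods the survey lists, the dark channel prior (He et al.) is compact enough to sketch. The following Python version uses conventional parameter values, not values from this survey; for a single-channel infrared image the per-pixel channel minimum degenerates to the image itself, so the patch minimum does the work.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dcp(img, patch=15, omega=0.95, t_min=0.1):
    """Dark channel prior dehazing sketch.
    img: float array in [0, 1], shape (H, W) for IR or (H, W, 3) for color."""
    chan_min = img if img.ndim == 2 else img.min(axis=2)
    dark = minimum_filter(chan_min, size=patch)            # dark channel
    # atmospheric light A: mean over the brightest 0.1% of dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)                              # scalar or per-channel
    norm = img / A
    norm_min = norm if norm.ndim == 2 else norm.min(axis=2)
    t = 1.0 - omega * minimum_filter(norm_min, size=patch) # transmission map
    t = np.maximum(t, t_min)                               # avoid division blow-up
    t = t if img.ndim == 2 else t[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```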
Infrared image super-resolution based on spatially variant blur kernel calibration
Cao Junfeng, Ding Qinghai, Luo Haibo
2024, 53(2): 20230252.   doi: 10.3788/IRLA20230252
  Objective  In recent years, infrared imaging systems have been increasingly used in industry, security, and remote sensing. However, the resolution of infrared devices is still quite limited due to cost and manufacturing restrictions. To increase image resolution, deep learning-based single image super-resolution (SISR) has gained much interest and made significant progress on simulated images. However, when applied to real-world images, most approaches suffer a performance drop, such as over-sharpening or over-smoothing. The main reason is that these methods assume blur kernels are spatially invariant across the whole image. Such an assumption rarely holds for infrared images, whose blur kernels are usually spatially variant due to factors such as lens aberrations and thermal defocus. To address this issue, a blur kernel calibration method is proposed to estimate spatially variant blur kernels, and a patch-based super-resolution (SR) algorithm is designed to reconstruct super-resolution images.   Methods  A parallel light tube and a motorized rotating platform are used to establish the target image acquisition environment, and images of a multi-circle target at different positions are gathered (Fig.1). Based on sub-pixel-accurate circle center detection, the camera pose parameters are solved, and high-resolution target images are synthesized according to these parameters. High-resolution and low-resolution target image pairs are fed into the blur kernel estimation network to obtain accurate blur kernels (Fig.3). In addition, a patch-based super-resolution algorithm is designed, which decomposes the test image into overlapping patches, reconstructs each of them separately using the estimated kernels, and finally merges them according to Euclidean distances (Fig.4).   Results and Discussions   The experimental results show that the blur caused by the optical system is not negligible and varies slowly with spatial position (Fig.6). The proposed method, which calibrates blur kernels in a laboratory setting, obtains more accurate blur kernel estimates. As a consequence, the proposed patch-based super-resolution algorithm produces more visually pleasing results with more reliable details (Fig.7-8), and also improves objective quality indicators such as the natural image quality evaluator (NIQE), the perception-based image quality evaluator (PIQE), and the blind/referenceless image spatial quality evaluator (BRISQUE) (Tab.1). SR experiments on 4-bar targets with different spatial frequencies show that the proposed method can distinguish a target with a spatial frequency of 3.57 cycles/mrad, while the comparison methods can only distinguish 3.05 cycles/mrad under the same conditions (Fig.9).   Conclusions  A blur kernel calibration method is proposed to estimate spatially variant blur kernels, and a patch-based super-resolution algorithm is designed to implement super-resolution reconstruction. The experimental results show that image blur caused by the optical system changes slowly with spatial position. As a result, one blur kernel can be estimated for each image patch, instead of densely for each pixel, thereby reducing the complexity of calibration and the memory consumption during reconstruction. Thanks to the accurate blur kernel estimation, the proposed super-resolution algorithm outperforms the comparison methods in both qualitative and quantitative results.
Furthermore, the blur kernel calibration method is easy to implement in engineering applications. For any infrared camera, only dozens of multi-circle target images covering all areas of the focal plane are needed to complete the calibration. When real-time performance is required, the proposed calibration method can also be combined with other lightweight non-blind super-resolution methods. In the future, image blur caused by thermal defocus will be studied to expand the scope of the method.
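The overlapping-patch pipeline described above can be sketched as follows. The distance-based weight (pixels nearer a patch center get more weight) is one plausible reading of the "Euclidean distance" merge, and sr_with_kernel is a hypothetical stand-in for the non-blind SR step, so the sketch runs end-to-end without the paper's network.

```python
import numpy as np

def sr_with_kernel(patch, kernel):
    """Hypothetical non-blind SR operator for one patch (identity placeholder)."""
    return patch

def patch_sr(img, kernels, patch=64, stride=48):
    """Split img into overlapping patches, reconstruct each with its region's
    calibrated kernel, then blend overlaps with distance-based weights."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    weight = np.zeros_like(out)
    yy, xx = np.mgrid[0:patch, 0:patch]
    c = (patch - 1) / 2.0
    w = 1.0 / (1.0 + np.hypot(yy - c, xx - c))   # assumed blending weight
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            k = kernels[(y // stride, x // stride)]    # kernel for this region
            tile = sr_with_kernel(img[y:y+patch, x:x+patch], k)
            out[y:y+patch, x:x+patch] += w * tile
            weight[y:y+patch, x:x+patch] += w
    return out / np.maximum(weight, 1e-12)

# toy usage: one (placeholder) kernel per grid cell of a 256x256 image
img = np.random.rand(256, 256)
kernels = {(i, j): None for i in range(5) for j in range(5)}
result = patch_sr(img, kernels)
```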
Two-step random phase-shifting algorithm based on principal component analysis and VU decomposition method
Zhang Yu
2024, 53(2): 20230596.   doi: 10.3788/IRLA20230596
  Objective  The level of optical metrology determines the level of optical manufacturing technology, and phase-shifting interferometry (PSI), as an easy, high-speed, and accurate optical testing tool, is usually used during or after optical fabrication. Both accuracy and efficiency are important to PSI. Outstanding phase-shifting algorithms (PSAs) can reduce the requirements on the interferometer hardware and environment, and further improve the accuracy and speed of PSI. Traditional PSAs with known phase shifts are easily affected by miscalibration of the piezo-transducer and environmental errors. To save time, many single-step PSAs were developed; nevertheless, the sign of the phase is difficult to judge from only one interferogram. In some high-precision applications, accurate phase reconstruction is of interest, hence multi-step PSAs with more than three interferograms were developed. However, it is difficult to reconstruct the phase with high accuracy and efficiency simultaneously. Comparatively, two-step random PSAs can avoid the effect of phase shift error, solve the sign ambiguity of single-step PSAs, and balance accuracy and speed. However, general two-step random PSAs need pre-filtering or use complex methods to calculate the background, which costs more time. To balance computational time and accuracy, a fast and high-precision two-step random phase-shifting algorithm based on principal component analysis and the VU decomposition method is proposed in this paper.   Methods  A two-step random phase-shifting algorithm based on principal component analysis and VU decomposition is proposed. Firstly, the two-step principal component analysis method is used to calculate the initial phase of the iteration from two filtered phase-shifting interferograms, and then VU decomposition and iteration on two unfiltered phase-shifting interferograms are used to calculate the final phase. Finally, the proposed method is compared with four good two-step random phase-shifting algorithms for different fringe types, noise levels, phase shift values, and fringe numbers to verify its superior performance in computational time and accuracy.   Results and Discussions   Compared with four good two-step random phase-shifting algorithms, the proposed method has the best comprehensive performance for different fringe types, noise levels, phase shift values, and fringe numbers. The proposed method has the highest accuracy, and its effective phase shift range and fringe number range are the largest. When the size of the interferograms is 401 pixel × 401 pixel, the proposed method takes only 0.035 s more than the Gram-Schmidt orthonormalization algorithm and the two-step principal component analysis method. Under ideal conditions, the proposed method gives the exactly correct result. If high precision is required, it is best to suppress the noise in advance, set the phase shift value away from 0 and π, and use a fringe number greater than 2.   Conclusions  To balance the accuracy and speed of phase calculation, a fast and high-precision two-step random phase-shifting algorithm based on principal component analysis and the VU decomposition method is proposed in this paper. The method is characterized by high accuracy, high speed, and no filtering. It takes approximately the time of a non-iterative algorithm to achieve the accuracy of an iterative algorithm, breaking the limitation that iterative algorithms cost more time.
It is suitable for high-precision optical in-situ measurement and has broad prospects for development.
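The two-step PCA stage that seeds the iteration can be sketched as follows, under stated assumptions (Gaussian high-pass background removal and eigenvalue-normalized quadrature components); the VU iteration that refines this initial phase is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_step_pca_phase(i1, i2, bg_sigma=15):
    """Initial wrapped phase from two phase-shifted interferograms via PCA.
    Degrades when the unknown phase shift is near 0 or pi, matching the
    abstract's advice to keep the shift away from those values."""
    # background (DC) removal by high-pass filtering each interferogram
    x1 = i1 - gaussian_filter(i1, bg_sigma)
    x2 = i2 - gaussian_filter(i2, bg_sigma)
    X = np.stack([x1.ravel(), x2.ravel()], axis=1)     # (N_pixels, 2)
    C = X.T @ X                                        # 2x2 scatter matrix
    evals, evecs = np.linalg.eigh(C)                   # ascending eigenvalues
    Y = X @ evecs                                      # principal projections
    # dividing by sqrt(eigenvalue) equalizes the quadrature amplitudes
    q = Y / np.sqrt(np.maximum(evals, 1e-20))
    phase = np.arctan2(q[:, 0], q[:, 1])               # wrapped, sign-ambiguous
    return phase.reshape(i1.shape)

# self-check on synthetic fringes with an assumed test phase
y, x = np.mgrid[0:256, 0:256] / 256.0
phi = 20 * np.pi * x + 3 * np.sin(6 * y)
delta = 1.2                                            # unknown phase step
i1 = 100 + 50 * np.cos(phi)
i2 = 100 + 50 * np.cos(phi + delta)
est = two_step_pca_phase(i1, i2)                       # ~ phi + delta/2, wrapped
```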
Ocean optics
Propagation properties of the vortex beam in the slant path of ocean turbulence under weak wind model
Wu Pengfei, Li Chengyu, Lei Sichen, Tan Zhenkun, Wang Jiao
2024, 53(2): 20230441.   doi: 10.3788/IRLA20230441
  Objective  In recent years, with the development of underwater laser communication, laser imaging, lidar, and other technologies, many scholars have carried out extensive research on beam propagation in ocean turbulence. Beam propagation in the ocean is strongly affected by turbulence, and orbital angular momentum multiplexing of vortex beams greatly increases system capacity, so investigating the propagation of vortex beams in ocean turbulence is of great significance. Most previous studies have focused on propagation through horizontal ocean turbulence; in practical applications, however, beams mostly propagate through ocean turbulence along slant paths.   Methods  Based on the theory of horizontal ocean turbulence, phase screens of ocean turbulence in the slant path are generated and compensated, and their correctness is demonstrated by the phase structure function. The uplink propagation model of a collimated Gaussian vortex beam in ocean turbulence is built with the multi-phase screen method. The intensity and phase profiles, beam wander, on-axis scintillation index, and long-exposure beam radius of the collimated Gaussian vortex beam in the slant path are numerically simulated and analyzed for different values of the zenith angle, the inner and outer scales of oceanic turbulence, the topological charge, and other turbulence parameters.   Results and Discussions   A two-dimensional map of a random ocean turbulence phase screen is shown (Fig.2(a)), and the correctness of the slant-path phase screen is demonstrated by the phase structure function (Fig.2(b)). The beam wander of the collimated Gaussian vortex beam versus propagation distance is simulated for different depth-averaged tidal velocities (Fig.6(b)), and for different wind speeds, zenith angles, and outer scales of oceanic turbulence (Fig.7). The on-axis scintillation index versus propagation distance is simulated for different inner and outer scales of oceanic turbulence (Fig.8(b)).   Conclusions  The correctness of the slant-path ocean turbulence phase screen is demonstrated by the phase structure function, and the uplink propagation is simulated by the multi-phase screen method. The results show that the smaller the topological charge of the vortex beam and the larger the inner and outer scales of oceanic turbulence, the greater the influence of turbulence on the beam. The beam wander, on-axis scintillation index, and long-exposure beam radius of the collimated Gaussian vortex beam increase with the outer scale of ocean turbulence; taking the outer scale as infinity, as in the ideal case, therefore overestimates the effect of ocean turbulence on the beam. The beam wander and on-axis scintillation index are mainly affected by the propagation distance in uplink propagation. Moreover, owing to the characteristics of the vortex beam, the topological charge has significant effects on the intensity and phase profiles and on the long-exposure beam radius.
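The multi-phase-screen (split-step) scheme described above alternates free-space angular-spectrum steps with thin random phase screens; the Python sketch below shows the skeleton. The toy screens here are smoothed Gaussian noise, not screens generated from the oceanic turbulence spectrum and validated against the phase structure function as in the paper; grid size, wavelength, and beam parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

N, L, wvl = 256, 0.2, 532e-9       # grid points, grid width (m), wavelength (m)
dx = L / N
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)

def propagate(u, dz):
    """Free-space angular-spectrum (Fresnel) step of length dz."""
    H = np.exp(-1j * np.pi * wvl * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def toy_screen(strength=1.5, corr_px=8):
    """Placeholder phase screen: correlated Gaussian noise, NOT generated
    from the oceanic turbulence spectrum used in the paper."""
    return strength * gaussian_filter(np.random.randn(N, N), corr_px)

# collimated Gaussian vortex beam with topological charge m
w0, m = 0.02, 2
r2 = X**2 + Y**2
u = (np.sqrt(r2) / w0)**abs(m) * np.exp(-r2 / w0**2) \
    * np.exp(1j * m * np.arctan2(Y, X))

for _ in range(10):                 # 10 screens over a 50 m toy path
    u = propagate(u, dz=5.0)
    u = u * np.exp(1j * toy_screen())
intensity = np.abs(u)**2            # wander/scintillation statistics follow from here
```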
Infrared technology and application
Numerical and experimental research on the effect of outlet structural parameters of diverter nozzle on infrared suppressor performance
Du Jiadong, Shan Yong, Zhang Jingzhou
2024, 53(2): 20230459.   doi: 10.3788/IRLA20230459
  Objective  With the rapid development of advanced infrared detection technology and infrared tracking and striking technology, armed helicopters are increasingly threatened by infrared guided missiles from the ground and air on the modern high-tech battlefield. To improve the battlefield survivability and combat assault capability of armed helicopters, advanced infrared stealth technology must be developed. Research shows that shielding technology and improving the ejector capacity of the suppressor have a significant effect on reducing the infrared radiation intensity of the exhaust system, but the specific technical means depend on the structure of the infrared suppressor. For the diverter nozzle ejector infrared suppressor, limited by the size and shape of the helicopter, it is difficult to improve the ejector capacity of the diverter nozzle and reduce the exhaust and wall temperatures in a limited space. Therefore, it is necessary to discuss modification schemes for the diverter nozzle outlet to reduce the infrared radiation intensity of the diverter nozzle ejector infrared suppressor.   Methods  A physical model was established, including the diverter nozzle, gas-collecting chamber, ejected gas inlet, curved mixing tube, covering shelter, and outer cover (Fig.1). Structured and unstructured hybrid grids were established, and the infrared radiation of the suppressor was calculated by the forward-backward ray-tracing method. The calculation method is verified by experimental data (Tab.1-2, Fig.9). By comparing the pumping coefficient, total pressure recovery coefficient, outlet and wall temperature distributions of the mixing tube, and infrared radiation intensity of the diverter nozzle ejector infrared suppressor (Fig.10-14), the effect of the outlet structural parameters of the diverter nozzle on suppressor performance is analyzed from multiple perspectives.   Results and Discussions   The experimental data are used to verify the calculation method. The pumping coefficient and total pressure recovery coefficient of the infrared suppressor under different diverter nozzle outlet structures are compared and analyzed (Fig.10). The exhaust temperature distribution on the mixing tube outlet plane under different outlet structures is shown (Fig.11), as is the temperature distribution on the outer mixing tube wall (Fig.12). The infrared radiation intensity distributions of the suppressor with different outlet configurations on the horizontal and vertical (plumb) detection surfaces are shown for the 3-5 μm band (Fig.13) and the 8-14 μm band (Fig.14).   Conclusions  Compared with the original model, the pumping coefficient of Lobe_1, which has a certain expansion angle, is slightly reduced, its total pressure recovery coefficient is reduced, and the peak exhaust temperature at the outlet of the intermediate mixing tube is reduced by 65.1 K. For Lobe_1, the wall temperature in the upper and lower areas of the mixing tube is reduced, but the temperature in local areas of the outer wall of the middle and rear sections of the mixing tube is increased. The lobed outlet structure (Lobe_2) with an outer expansion angle of 0 increases the pumping coefficient by 3.8%.
Its total pressure recovery coefficient is basically the same as that of the Lobe_1 model, the peak exhaust temperature of the intermediate mixing tube is also reduced by 62.8 K, and its effect in reducing the wall temperature of the mixing tube is the best. The diverter nozzle outlet with a tab structure increases the pumping coefficient by 10.6%, but the total pressure recovery coefficient decreases by 0.7%, and the average exhaust temperature of the inner mixing tube decreases by 19.3 K. The tab model has a poor effect on cooling the wall of the mixing tube. In general, both the lobe and tab structures enhance ejection and mixing. In particular, the lobed outlet structure (Lobe_2) has the best effect on reducing the overall infrared radiation of the suppressor: the infrared radiation intensity can be reduced by up to 21% in the 3-5 μm band and by 15% in the 8-14 μm band.
Silicon based near-infrared absorption enhancement structure with gradient doping of nano metal particles
Sun Yujia, Chen Fangzhou, Li Xiaozhi
2024, 53(2): 20230519.   doi: 10.3788/IRLA20230519
  Objective  Silicon based optoelectronics are compatible with CMOS technology, and with the help of mature microelectronic processing platforms, large-scale mass production can be achieved, with the advantages of low cost, high integration, and high reliability. The application of silicon based semiconductor detectors in the visible band is mature. However, the semiconductor materials commonly used for near-infrared detectors have drawbacks such as difficult compatibility with existing CMOS processes and high prices. Therefore, extending the operating range of silicon based detectors to the near-infrared band is of great significance. Due to the bandgap of silicon, the absorption of electromagnetic waves by silicon based materials in the near-infrared band is severely limited, posing serious challenges for the application of silicon based detectors in this band.  Methods  To break through the bandgap limitation of silicon and improve the absorption of silicon materials in the near-infrared band, a silicon based structure with gradient doping of metal nanoparticles was proposed, based on the near-field enhancement generated by the localized surface plasmon resonance of metal nanoparticles. The gradual change in doping concentration effectively avoids the severe change in reflectivity caused by an abrupt refractive index transition. Applying the Maxwell-Garnett effective medium theory, the absorption characteristics of the composite silicon based structure in the visible and near-infrared bands were simulated, and the effects of two doping concentration profiles and two doping metals on the absorption enhancement were compared.  Results and Discussions   The results indicate that the structure significantly improves electromagnetic wave absorption in the near-infrared band. When the doped metal is silver, both decreasing and increasing doping profiles improve absorption in the 640-1080 nm range; however, the increasing profile avoids the drastic change in reflectivity caused by refractive index mutations, and its effect is significantly better than the decreasing profile (Fig.6). Comparing different metals, the absorption enhancement band brought by gold nanoparticle doping is wider than that of silver nanoparticles, so gradient increasing doping of gold nanoparticles is optimal, improving absorption in the 610-1450 nm range by up to 10.7 dB (Fig.7).  Conclusions  A silicon based structure that can break through the bandgap limitation of silicon was proposed, near-infrared absorption enhancement was achieved, and the enhancement under different conditions was simulated and analyzed. The proposed structure can effectively enhance the absorption efficiency of silicon based materials in the near-infrared band, which helps improve the performance of silicon based devices. By comparing doping profiles and metal choices, gradient increasing doping of gold nanoparticles is concluded to be optimal. These results provide an important reference for the application of silicon based semiconductor detectors in the near-infrared band.
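The Maxwell-Garnett step used above reduces to a one-line formula: for inclusions of permittivity ε_i at volume fraction f in a host ε_h, ε_eff = ε_h(1 + 2fβ)/(1 − fβ) with β = (ε_i − ε_h)/(ε_i + 2ε_h). The Python sketch below evaluates it; the permittivity values are illustrative placeholders near 1000 nm, while the paper's simulation uses dispersive material data.

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell-Garnett effective permittivity for volume fraction f."""
    beta = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * f * beta) / (1 - f * beta)

eps_si = 12.6 + 0.0j          # assumed host (silicon) permittivity
eps_ag = -48.8 + 3.2j         # assumed metal (silver-like) permittivity
for f in (0.01, 0.05, 0.10):  # gradient doping = f increasing with depth
    eps_eff = maxwell_garnett(eps_si, eps_ag, f)
    n_eff = np.sqrt(eps_eff)  # complex refractive index of the composite
    print(f"f={f:.2f}: eps_eff={eps_eff:.3f}, n={n_eff.real:.3f}, k={n_eff.imag:.3f}")
```

The nonzero imaginary part k of the composite index is what produces absorption inside the otherwise transparent silicon, and sweeping f with depth is the "gradient doping" that smooths the reflectivity transition.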
Optical devices
Research on miniature three-axis vibration sensor based on FBG
Tang Xiang, Wu Jun, Li Qihui, Xin Jingtao, Dong Mingli
2024, 53(2): 20230518.   doi: 10.3788/IRLA20230518
  Objective  Vibration measurement plays an essential role in machinery fault diagnosis and structural health monitoring, and vibration sensors are the most important tools in measuring equipment. Electrical vibration sensor technology is relatively mature and low-cost, but suffers from drawbacks such as poor circuit stability, poor signal-to-noise performance, and susceptibility to electromagnetic interference. In contrast, fiber Bragg grating (FBG) vibration sensors offer numerous advantages, such as immunity to electromagnetic interference, high- and low-temperature resistance, and corrosion resistance, and are widely used in aerospace, large-scale structure monitoring, industrial propulsion, etc.   Methods  A high-density tantalum block serves as the mass block, a nickel-titanium alloy serves as the elastic beam, and an ultra-short fiber grating serves as the sensitive element of the vibration sensor. The mass block, fixed to the elastic sheet, reciprocates as the sensor vibrates under external forces. This reciprocating motion stretches the fiber grating and causes axial strain, which shifts the center wavelength; the shift in the center wavelength can thus be used to track the vibration. The vibration sensor's packaging is completed, and the amplitude-frequency and sensitivity characteristics of the sensor are carefully investigated with a purpose-built packaging platform and test equipment.   Results and Discussions   The sensor has a broad frequency range, high sensitivity, and excellent lateral anti-interference performance. The operating frequency range is 0 to 1 200 Hz. The characteristic frequencies are 1 850 Hz, 1 770 Hz, and 1 860 Hz in the X, Y, and Z directions, respectively. The sensitivities in the three directions are 77.37 pm/g, 80.73 pm/g, and 75.04 pm/g, respectively, and the lateral crosstalk is less than 5%.   Conclusions  This article presents an ultra-compact three-axis vibration sensor. Owing to its light weight, wide operating frequency band, and high sensitivity, the packaged sensor has considerable application potential in satellite micro-vibration measurement.
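The readout relation implied by the reported sensitivities is a simple linear one: acceleration follows from the Bragg-wavelength shift as a = Δλ / S per axis. A minimal sketch using the paper's sensitivity values; peak detection from the interrogator spectrum is outside its scope.

```python
# Sensitivities from the paper, in pm per g, for each axis
SENS_PM_PER_G = {"x": 77.37, "y": 80.73, "z": 75.04}

def accel_g(axis: str, wavelength_shift_pm: float) -> float:
    """Convert an FBG center-wavelength shift (pm) to acceleration (g)."""
    return wavelength_shift_pm / SENS_PM_PER_G[axis]

print(accel_g("y", 40.4))  # a 40.4 pm shift on the Y axis is ~0.5 g
```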
Photoelectric measurement
Monocular spatial attitude measurement method guided by two dimensional active pose
Liu Feng, Guo Yinghua, Wang Lin, Gao Peipei, Zhang Yuetong
2024, 53(2): 20211026.   doi: 10.3788/IRLA20211026
  Objective  Monocular vision measurement technology has the advantages of simple structure, low cost, and convenient, flexible operation. There are in general two types of monocular vision measurement. One combines a monocular camera with the measured object, but it requires designing a suitable cooperative target, which has certain limitations. The other combines a monocular camera with an active sensor, but adjusting or calibrating the pose relationship between the camera and the active sensor is complicated. Aiming at rapid pose measurement of space objects, this paper studies a monocular visual spatial pose measurement method guided by a two-dimensional active pose. The method only requires a camera and a precision two-dimensional platform to collect one image before and one after the platform rotates, which completes the rapid attitude measurement of a space object. The method has the advantages of low cost, simple operation, a large measuring range, and little dependence on equipment.  Methods  A monocular attitude measurement system composed of a monocular camera, a precision two-dimensional platform, and the measured object is established, and an attitude measurement model of the camera, the platform, and an inclinometer is designed. Precision checkerboard images and the two platform angles at different image positions were captured by the camera multiple times to carry out joint visual calibration of the camera and the two-dimensional platform (Fig.2). The pose relationship between the camera and the platform was obtained, and the pose relationship between the checkerboard and the initial camera coordinate system was calculated. Taking the geodetic inclinometer coordinate system as the base, the pose relationship between the inclinometer and the attitude measuring system was calibrated from the coordinate relationship between the inclinometer and the checkerboard (Fig.3), and the measured values were converted into the inclinometer coordinate system, realizing rapid monocular vision measurement.  Results and Discussions  A monocular visual spatial pose measurement method based on 2D active pose guidance is studied. By acquiring precision checkerboard images multiple times, the pose relationships between the camera and the two-dimensional platform, and between the inclinometer and the measuring system, were obtained, and the calibration errors of the pitch and roll angles were both < 0.31° (Fig.7). Taking the checkerboard as the measured object and combining the calibrated parameters, the measurement error is largest (0.82°) at a pitch angle of about 15°; at a roll angle of about −15°, the maximum measurement error is −0.43° (Fig.10).  Conclusions  In this paper, a monocular visual spatial pose measurement method based on 2D active pose guidance is studied, and an attitude measurement model of the monocular camera, the precision two-dimensional platform, and the inclinometer is established. The method uses only one camera and does not need to consider the baseline distance of a binocular setup. Moreover, after calibration the method enables rapid measurement of an object's attitude under fixed-axis dual-angle photography. The experimental results show that the proposed method can quickly measure the attitude of space objects.
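The coordinate-chain idea behind converting camera-frame measurements into the inclinometer frame can be sketched with homogeneous transforms. The matrix values below are illustrative placeholders, not the paper's calibrated parameters, and the single-axis rotations stand in for full calibrated rotations.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Convention: T_b_a maps coordinates from frame a to frame b (placeholder values)
T_p_c = homogeneous(rot_z(1.5), np.array([0.10, 0.02, 0.30]))   # camera -> platform (calibrated)
T_i_p = homogeneous(rot_z(-0.8), np.array([0.00, 0.15, 0.05]))  # platform -> inclinometer (calibrated)
T_c_o = homogeneous(rot_z(15.0), np.array([0.0, 0.0, 2.0]))     # measured object pose, camera frame

T_i_o = T_i_p @ T_p_c @ T_c_o   # object pose expressed in the inclinometer frame
print(np.degrees(np.arctan2(T_i_o[1, 0], T_i_o[0, 0])))  # yaw about Z, degrees
```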
Displacement measurement error of grating interferometer based on vector diffraction theory
Lei Lihua, Zhang Yujie, Fu Yunxia
2024, 53(2): 20230536.   doi: 10.3788/IRLA20230536
  Objective  The grating interferometer displacement measurement system is one of the most precise measuring instruments in the measurement field, but perfect assembly of the grating's attitude and position cannot be guaranteed during measurement. This causes a deviation between the grating vector direction and the motion vector direction, leading to periodic nonlinear errors in the displacement measurement results. In previous studies, the assembly error of the grating displacement measurement system is usually analyzed in the scalar regime, ignoring the effect of the incident azimuth angle on the system. Based on grating vector diffraction theory, this paper analyses the attitude and position errors between the grating, the displacement stage, and the readhead that occur during displacement measurement with a grating interferometer, and illustrates the possible displacement measurement errors by analysing the angular deviations in three dimensions, providing a theoretical basis for subsequent improvement of the device.  Methods  Ideally, the displacement measurement of a grating interferometer is based on the period of its core component, the grating. Due to the non-ideal assembly of the grating, the displacement stage, the readhead, the optics, and other system modules, however, geometric errors arise in the system. The non-ideal assembly of the grating with the displacement stage, and of the grating with the readhead, are the main sources of these geometric errors. In this paper, we analyse the attitude and position errors between the grating, the displacement stage, and the readhead by establishing a displacement coordinate system OXYZ and a grating coordinate system OX'Y'Z', following the attitude representation used for aircraft in the field of inertial navigation. We set the roll, pitch, and yaw angles of the one-dimensional grating to be α, β, and γ respectively, which are commonly used to describe the assembly state of a 1D grating relative to the translation stage. By analysing the angular deviations in the three dimensions based on grating vector diffraction theory, the possible displacement measurement errors are derived and illustrated.  Results and Discussions  The analysis of the grating assembly errors shows that the geometric errors caused by the non-ideal assembly of the metrology grating are mainly due to the rotational error angles β and γ around the Y' and Z' axes, while rotation of the grating around the X' axis does not cause any additional measurement error. The error expressions show that the error angles β and γ affect the measurement error in the same way. When analysing the readhead assembly error, the biggest difference from the grating assembly error is that the readhead assembly error causes the system to no longer satisfy the Littrow configuration, further complicating the problem. Precisely because of this, a more general conclusion is explored based on the generalised one-dimensional grating equations, from which the relationships between the systematic measurement error, the three error angles α, β, and γ, and the angles θ1, θ2, Ψ1, Ψ2 describing the relative states of the incident P-light and Q-light are discussed respectively.
  Conclusions  This paper analyses the measurement errors caused by clamping problems in the grating displacement measurement system from two aspects, grating assembly errors and readhead assembly errors, and provides an analytical description of the possible displacement measurement errors. The analysis based on grating vector diffraction theory shows that, when the Littrow incidence configuration is satisfied, the roll angle has no effect on the measurement results among the roll, pitch, and yaw angles, and expressions for the effects of the pitch and yaw angles on the displacement measurement are derived. When the Littrow incidence configuration is not satisfied, the obliquely incident laser acquires an additional azimuth angle; according to the generalised one-dimensional grating equation, a more general conclusion in the presence of the azimuth angle is deduced, which provides a theoretical basis for subsequent improvement of the device.
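To give a feel for the magnitudes involved, the toy check below uses the classic cosine-type misalignment model (measured displacement ≈ x·cosβ·cosγ), not the exact expressions derived in the paper, to illustrate why the pitch and yaw error angles β and γ matter while the roll angle α does not appear.

```python
import numpy as np

def cosine_error_nm(x_mm, beta_deg, gamma_deg):
    """Displacement error (nm) over travel x under pitch/yaw misalignment,
    assuming the first-order cosine model, NOT the paper's full derivation."""
    b, g = np.radians(beta_deg), np.radians(gamma_deg)
    return x_mm * 1e6 * (1 - np.cos(b) * np.cos(g))

for ang in (0.01, 0.05, 0.1):   # degrees of pitch = yaw misalignment
    print(f"{ang:.2f} deg -> {cosine_error_nm(10.0, ang, ang):.2f} nm over 10 mm")
```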
Accuracy analysis of a three-dimensional angle measurement sensor based on dual PSDs
Zhao Wenhe, Bai Yangyang, Wang Jinkai, Zhang Lizhong
2024, 53(2): 20230543.   doi: 10.3788/IRLA20230543
  Objective  In systems such as airborne photoelectric turntables and multi-degree-of-freedom swing tables, three-dimensional angle measurement is often required. Angle measurement methods are divided into contact and non-contact measurement, and different methods must be selected according to the application scenario. For flexible supports, parallel support platforms, and cases where the rotation axis between the moving object and the base is uncertain, non-contact methods must be considered. At present, common non-contact three-axis angle measurement schemes are complex and occupy a large space, which cannot meet the volume and weight requirements of airborne and spaceborne payloads. It is therefore necessary to develop a non-contact three-axis angle measurement method with a simple structure and a small footprint to meet the needs of different usage environments. To this end, a non-contact three-dimensional angle measurement system based on two position sensitive detectors (PSDs) is proposed.   Methods  A three-axis angle measurement system based on dual PSDs has been established. The system consists of two parts: an autocollimation measurement unit and a double-sided reflection wedge (Fig.2). The autocollimation measurement unit includes a light source, PSD1, PSD2, an autocollimation lens, and the subsequent processing circuits. The light beam emitted by the source is collimated into parallel light by the lens, and PSD1 and PSD2 receive the spots formed by the reflected beams, with signal processing performed by the processing circuit. The double-sided reflective wedge has a semi-reflective, semi-transparent front surface and a fully reflective rear surface; its function is to split the incident collimated light into two beams and reflect them back into the autocollimating lens, which converges them onto the target surfaces of the two PSDs to form light spots. Based on the angle measurement principle, a calibration method for the two PSDs is designed to compensate for welding errors, and an FIR filtering algorithm is used to filter the collected analog signal to improve accuracy.   Results and Discussions   A three-axis angle measurement system based on dual PSDs has been designed, and a calibration experimental system (Fig.5) has been established to calibrate the relative position of the two PSDs. The welding error of the relative PSD positions is compensated through a rotation matrix and a translation matrix, with good compensation results. A 34th-order FIR filter was designed and simulated; the experimental results show that the filter effectively suppresses the noise in the actually collected signals. The filter was implemented on the processing MCU, and its phase-frequency response was analyzed. The test results show that the response bandwidth of the filter is 1.31 kHz, which effectively removes high-frequency noise from the analog voltage signal. The angle measurement experimental system (Fig.13) has been established, the three-axis angle measurement function of the system has been verified, and the system shows high accuracy.
This system has the advantages of simple structure, small size, high accuracy, large measurement range, high bandwidth, non-contact operation, and insensitivity to axial translation. The rotation matrix and translation matrix obtained from calibrating the two PSDs, together with the designed 34th-order FIR filter, are coded into an STM32F4-series microcontroller; the filter delay is approximately 525 μs, which is within an acceptable range. The processing circuit and the selected devices, designed according to the actual project requirements, have been experimentally verified. Within a measurement range of ±2°, the yaw angle measurement accuracy reaches 0.006°, the pitch angle accuracy 0.009°, and the roll angle accuracy 0.021°. The autocollimation measurement unit weighs 230 g and fits in a 50 mm × 50 mm × 50 mm box. The response frequency of the measurement system reaches 1.15 kHz. The system measures three-axis angles in real time at high speed, with high accuracy and small volume, and is suitable for various engineering applications, providing stable, high-speed three-axis angle measurement solutions for airborne, spaceborne, and other platforms.
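A 34th-order low-pass FIR like the one described above can be sketched with scipy.signal.firwin. The sample rate and cutoff below are assumptions chosen to echo the reported ~1.31 kHz bandwidth and ~525 μs delay, not the paper's actual coefficients.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 32_000                 # assumed ADC sample rate, Hz (not stated in the paper)
order = 34                  # 34th-order FIR -> 35 taps
taps = firwin(order + 1, cutoff=1_310, fs=fs)   # linear-phase low-pass

# group delay of a linear-phase FIR is order/2 samples
delay_us = order / 2 / fs * 1e6
print(f"filter delay ~= {delay_us:.0f} us")     # ~531 us under these assumptions

# apply to a noisy test signal
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.random.randn(t.size)
y = lfilter(taps, 1.0, x)
```

The fixed, frequency-independent group delay of a linear-phase FIR is why a constant ~525 μs delay figure is meaningful for this design; an IIR filter of similar selectivity would smear the delay across frequency.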
Calibration method of fisheye camera for high-precision collimation measurement
Xiong Kun, He Xuran, Wang Chunxi, Li Jiabin, Yang Changhao
2024, 53(2): 20230549.   doi: 10.3788/IRLA20230549
  Objective  Collimation measurement is one of the most widely used precision angle and attitude measurement methods. By imaging a known reference target at infinity, the accurate angular relationship between the measured object and the reference target can be obtained, with high accuracy and high repeatability. Photoelectric autocollimators, electronic total stations, theodolites, and other measuring and calibration instruments all take collimation measurement as their main measurement principle. Limited by the calibration accuracy achievable for large-field-of-view, high-distortion optical systems, the camera field of view used in precision collimation measurement is usually small, which greatly restricts large-range angle measurement. Fisheye cameras have the advantages of a large field of view, small volume, and light weight, and therefore hold broad promise in the field of measurement and calibration. However, because of the large field of view and large distortion of a fisheye camera, the imaging process is highly nonlinear, and asymmetry in lens processing severely affects the imaging model parameters. For this reason, a fisheye camera calibration method for high-precision collimation measurement is proposed in this paper.   Methods  A two-step fisheye camera calibration method for collimation measurement is proposed, comprising a radial rough calibration based on interpolation and a fine calibration based on grid compensation. The method uses interpolation instead of constructing a camera model, which effectively avoids the systematic errors caused by an inaccurate model and unreasonable parameter settings, and to a certain extent suppresses the deviations caused by lens-processing asymmetry and optical system adjustment. Unlike commonly used performance indicators such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), the mean reprojection error (MRE) selected in this paper more effectively measures camera calibration results under collimation measurement conditions.   Results and Discussions   According to the classical fisheye camera model, four different virtual fisheye cameras are constructed for simulation experiments (Tab.1). The simulation results show that the calibration effect of this method on the four virtual fisheye camera models is better than that of recently proposed calibration methods (Fig.8), and the calibration uncertainty is improved by 82.63% compared with the traditional method. A fisheye camera calibration prototype based on an embedded platform is then designed (Tab.2). The calibration experiments with the prototype show that the proposed method can effectively calibrate a real fisheye camera for collimation measurement (Fig.10); after applying the calibration, the uncertainty of the incident vector solution of the prototype reaches the arcsecond level (Fig.11).   Conclusions  A fisheye camera calibration method for high-precision collimation measurement is proposed, in which the calibration process is divided into two parts: radial calibration and grid calibration. Firstly, two kinds of calibration sample points are collected with the help of a high-precision turntable and a collimator.
Then the rough construction of the imaging model is completed by radial calibration. Finally, grid calibration is used to eliminate the error caused by the non-coincidence of the rotation axis and the optical axis in radial calibration, further improving the calibration accuracy. Simulation comparison experiments and prototype verification experiments prove that the method achieves high calibration accuracy. Moreover, the method can be applied to the high-precision calibration of various real fisheye cameras for collimation measurement and can provide technical support for the future development of collimation measurement.
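To make the interpolation idea concrete, the following is a minimal sketch (our illustration, not the authors' code) of an interpolation-based radial rough calibration: hypothetical sample points pair turntable-set incidence angles with measured radial image distances, a monotone interpolant stands in for an explicit lens model, and the mean reprojection error (MRE) is evaluated in pixels.

```python
# Sketch of interpolation-based radial calibration; all sample values are hypothetical.
import numpy as np
from scipy.interpolate import PchipInterpolator

# Calibration samples: radial image distance (pixels) vs incidence angle (rad),
# as would be collected with a high-precision turntable and a collimator.
r_samples = np.array([0.0, 120.0, 240.0, 355.0, 465.0, 570.0])
theta_samples = np.deg2rad([0.0, 15.0, 30.0, 45.0, 60.0, 75.0])

theta_of_r = PchipInterpolator(r_samples, theta_samples)  # rough radial calibration map
r_of_theta = PchipInterpolator(theta_samples, r_samples)  # inverse map for reprojection

def mean_reprojection_error(theta_true, r_measured):
    """MRE in pixels: reproject the known angles and compare with measurements."""
    r_pred = r_of_theta(theta_true)
    return np.mean(np.abs(r_pred - r_measured))

# Sanity check on the (noise-free) samples themselves: the MRE should be ~0 here.
print(mean_reprojection_error(theta_samples, r_samples))
```

In the paper's two-step scheme, a grid compensation stage would follow to correct residual non-radial deviations; the sketch above covers only the radial step.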
Research on phase unwrapping technology based on improved U-Net network
Xu Ruishu, Luo Xiaonan, Shen Yaoqiong, Guo Chuangwei, Zhang Wentao, Guan Yuqing, Fu Yunxia, Lei Lihua
2024, 53(2): 20230564.   doi: 10.3788/IRLA20230564
  Objective  Phase Measurement Deflectometry (PMD) is widely employed in free-form surface transmitted wavefront detection due to its simplicity, high accuracy, and broad detection range. Achieving high-precision phase acquisition is a critical step in the measurement and detection process. The phase unwrapping task plays a pivotal role in optical interferometry, magnetic resonance imaging, fringe projection profilometry (FPP), and other fields [1-4]. The challenge lies in recovering a continuously varying true phase signal from the observed wrapped phase signal, which is confined to the range [−π, π). While ideal phase unwrapping involves adding or subtracting 2π at each pixel based on the phase difference between adjacent pixels, practical applications face challenges such as noise and phase discontinuity, which introduce poles into the wrapped phase [5]. These poles cause computational errors to accumulate during unwrapping and lead to phase unwrapping failures. Various methods have been employed to unwrap the phase and obtain the real phase distribution. To address these challenges, this paper proposes a phase unwrapping algorithm based on an improved U-Net network.  Methods  The proposed algorithm uses U-Net as the base network, integrates a CBiLSTM module for sequence modeling, introduces an attention mechanism to enhance generalization, and explores optimized loss functions. The attention mechanism enables better capture of global spatial relationships, while CBiLSTM effectively captures and stores long-term dependencies through its memory-unit structure; the memory units selectively remember and forget parts of the input information, strengthening the network's ability to model long sequences. During model training, a composite loss function tailored to the spatial phase unwrapping problem is defined. The proposed network is validated on simulated and real datasets, showing outstanding performance under noise, discontinuity, and aliasing conditions. Comparative experiments against classic models such as U-Net [20] and Res-UNet [21], and against the methods of Wang [13] and Perera et al. [19], demonstrate the robustness of the proposed network under severe noise and discontinuities, as well as its computational efficiency in spatial phase unwrapping tasks.  Results and Discussions  Fig.10 compares the absolute phase predicted from the wrapped phase by the trained network with the true phase. Through the construction of the encoder-decoder model, the introduction of the CBiLSTM module and the attention mechanism module, and the definition of the composite loss function, comparison with other models verifies that the proposed network improves accuracy and reduces training cost in the three situations mentioned above.
Simulation experiments verify that, by enhancing the deep learning model's attention to key phase information, the proposed network improves the accuracy and robustness of phase unwrapping and promotes further development in fields such as optical measurement and phase imaging.  Conclusions  This paper addresses the challenge of wrapped phase unwrapping by introducing a novel convolutional architecture that frames the task as a regression problem. The proposed network incorporates several enhancements within the encoder-decoder framework, notably a CBiLSTM module and a soft attention mechanism. Comparative analyses with existing phase unwrapping methods demonstrate the network's remarkable performance in achieving precise phase unwrapping, even under severe noise, discontinuities, and aliasing. Notably, the network achieves this without requiring training on very large datasets, and its significantly reduced computational time renders it well-suited for tasks requiring accurate and rapid phase unwrapping. Validation experiments on real laboratory datasets further confirm the network's outstanding performance. The introduced model enables phase unwrapping under challenging conditions such as severe noise, discontinuities, and aliasing, surpassing the limitations of traditional methods; comparative assessments with other deep learning models yield a normalized root mean square error (NRMSE) as low as 0.75%. This advance in phase unwrapping technology holds substantial significance for optical free-form surface detection, contributing to enhanced measurement accuracy, precise control of optical parameters, optimization of optical design, and quality assurance in optical manufacturing and detection.
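For readers unfamiliar with the underlying relation, the following sketch (our illustration, not the authors' network) shows the wrapped/true phase relationship that such a regression learns, and why naive difference-integration unwrapping fails once noise or discontinuities appear; the test signal and all names are illustrative.

```python
# Minimal illustration of phase wrapping and naive (Itoh-style) 1-D unwrapping.
import numpy as np

def wrap(phi):
    """Wrap an absolute phase into [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def itoh_unwrap_1d(wrapped):
    """Naive unwrap: integrate the wrapped differences of the wrapped signal."""
    d = wrap(np.diff(wrapped))
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])

phi_true = np.linspace(0, 8 * np.pi, 200) + 0.5 * np.sin(np.linspace(0, 6, 200))
phi_wrapped = wrap(phi_true)
phi_rec = itoh_unwrap_1d(phi_wrapped)
# Exact here because adjacent true differences stay below pi; with noise or
# discontinuities pushing a difference past pi, the accumulated error persists.
print(np.max(np.abs(phi_rec - phi_true)))

# A learning-based unwrapper instead effectively recovers the integer wrap
# count per pixel: phi_true = phi_wrapped + 2*pi*k.
k = np.round((phi_true - phi_wrapped) / (2 * np.pi)).astype(int)
```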
Research on onboard radiation calibration scheme based on pixel-level adaptive gain imaging system
Li Ze, Wei Jun, Huang Xiaoxian, Tang Yuyu
2024, 53(2): 20230561.   doi: 10.3788/IRLA20230561
Lasers & Laser optics
Optical design
Design method of beam shaping system for double free-form surfaces based on Virtual Surface Iteration method
Zhu Quanjin, Ma Haotong, Chen Bingxu, Xing Yingqi, Lin Junjie, Tan Yi
2024, 53(2): 20230587.   doi: 10.3788/IRLA20230587
  Objective  The double free-form optical beam shaping system can adjust the spatial intensity distribution of a beam without altering its phase distribution, but it requires solving for the shape of the two free-form optical surfaces with the help of a virtual plane. This study reveals that applying the traditional single virtual surface method to beam shaping systems with a compact structure (short distance between the two free-form optical elements) and a large beam amplification ratio (the ratio of the cutoff radius of the outgoing beam to that of the incident beam) has drawbacks, namely significant errors in solving for the double free-form surfaces and reduced shaping effectiveness, where shaping effectiveness comprises the energy efficiency of the whole system and the irradiance uniformity of the shaped beam.  Methods  This paper presents a design method for a double free-form optical beam shaping system based on a virtual surface iteration strategy, and the concept of misalignment is introduced to evaluate the difference between the obtained second free-form surface and the virtual surface. The first step creates a virtual plane at the vertex of the second free-form surface; this virtual surface serves as the target surface for the beam exiting the first free-form surface. Subsequently, all discrete points on the first free-form surface are obtained from the virtual surface and Snell's law; all discrete points on the second free-form surface are then obtained by applying Snell's law to the beam exiting the first free-form surface and the target surface of the whole shaping system. Finally, an iterative process updates the virtual surface to approximate the true shape of the second free-form surface.  Results and Discussions  The quantitative analysis examines the influence of the beam amplification ratio β and the axial distance D between the two optical elements on the misalignment between the virtual surface and the actual surface. The misalignment is positively correlated with β and negatively correlated with D. Notably, the misalignment rapidly approaches 0 as the number of iterations increases, verifying the effectiveness of the virtual surface iteration method. Two distinct beam shaping systems are designed: a transmissive double free-form surface system and an off-axis two-mirror system. Simulation results demonstrate that both systems achieve over 95% irradiance uniformity and more than 99% energy efficiency (Tab.1). Furthermore, compared with the single virtual surface method, the proposed method improves irradiance uniformity by 2.93% and energy efficiency by 8.93%.  Conclusions  This paper presents a design method for the double free-form surface beam shaping system based on a virtual surface iteration strategy. The proposed method employs ray tracing to calculate discrete points on the free-form surfaces and uses the virtual surface iteration strategy to minimize the misalignment between the virtual and real surfaces. This approach ensures that the virtual surface continuously approaches the actual shape of the second free-form surface, thereby improving the coupling between the free-form surfaces in the beam shaping system.
Additionally, this study analyzes the correlation between the misalignment and the parameters of the beam shaping system, concluding that the misalignment is positively associated with both the beam amplification ratio and the compactness of the system's spatial structure. Simulation software is then employed to design and simulate a coaxial transmissive beam shaping system and an off-axis two-mirror beam shaping system; both simulations yield outgoing beams with ideal irradiance uniformity and energy efficiency. Compared with virtual plane methods, the proposed approach significantly improves the shaping effect, validating its usefulness for laser processing, medical treatment, optical information processing, and other fields requiring laser beam shaping systems.
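The two building blocks of such a design loop are vector-form Snell refraction for the point-by-point ray tracing and a misalignment measure that drives the iteration. Below is a minimal sketch under assumed names (our illustration, not the paper's implementation); solve_second_surface stands in for the paper's point-by-point surface construction, which we do not reproduce.

```python
# Sketch of the ray-tracing and iteration primitives; names are illustrative.
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (n opposes d); eta = n1/n2."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def misalignment(virtual_z, surface_z):
    """RMS axial gap between virtual-surface and solved-surface sample points."""
    return np.sqrt(np.mean((virtual_z - surface_z) ** 2))

# Iteration skeleton: replace the virtual surface with the newly solved second
# surface until the two agree.
# virtual_z = np.zeros(N)                      # initial virtual plane at the vertex
# for _ in range(max_iters):
#     surface_z = solve_second_surface(virtual_z)   # hypothetical solver
#     if misalignment(virtual_z, surface_z) < tol:
#         break
#     virtual_z = surface_z
```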
Tolerance desensitization method based on principal component analysis and nodal aberration theory
Guan Zihan, Wang Min, Li Xiaotong
2024, 53(2): 20230590.   doi: 10.3788/IRLA20230590
  Objective  Optical systems with low tolerance sensitivity have good manufacturability and high manufacturing yield, reducing processing and alignment costs. Achieving this requires evaluating and optimizing the system's tolerance sensitivity during design, so related desensitization methods must be studied. Traditional desensitization methods include using global optimization algorithms, establishing multiple structures, or combining the two to conduct extensive searches of the solution space, which demands large computational resources and has low optimization efficiency. Methods that control the system structure and ray-tracing parameters (such as surface curvature and ray deflection angle) or optimize specific aberration distributions to obtain tolerance-insensitive structures lack an analysis of the introduced aberrations, and the setting of their evaluation functions remains somewhat blind, which also limits optimization efficiency. Therefore, to achieve higher optimization efficiency, this paper proposes a tolerance desensitization method based on the analysis and control of introduced aberrations, and provides the corresponding optimization operands.  Methods  Zernike polynomials are used to quantify aberrations. On this basis, linear algebra and Monte Carlo analysis are used to find how the system's aberrations change after perturbations are introduced. The main introduced aberrations are then determined from the aberration field and the eigenvalue distribution after dimensionality reduction (Fig.7, Fig.11). Asymmetric perturbations and axial perturbations that may occur during manufacturing are modeled; the aberrations they introduce are described using nodal aberration theory, and the key surfaces are identified through statistical analysis (Fig.8, Fig.12). According to the correspondence between Zernike terms and wave aberrations, the aberration space is transformed and a corresponding evaluation function is proposed. Based on the preceding analysis, the weights and application surfaces of each term of the evaluation function are determined, and the function is then included in the optimization process to suppress the generation of new aberrations. The analysis and optimization workflow of this method is shown in Fig.3.  Results and Discussions  The method has been applied to the design of an F/11 optical system (Structure 1) and an NA 0.5 optical system (Structure 2). After optimization, the expected as-built performance is significantly improved. Taking the on-axis MTF at the specified spatial frequency with a 98% confidence level as an example, the performance of the two optimized systems increases by about 68% (Fig.9) and 20% (Fig.13), respectively. Compared with optimization using the TOLR operand in Zemax, the optimization time for Structure 1 is reduced from 7 hours to 36 minutes, and tolerance desensitization is also successfully achieved for Structure 2.  Conclusions  A method for reducing tolerance sensitivity based on the analysis and suppression of introduced aberrations is proposed. The Zernike coefficient matrix is processed by principal component analysis, and the dimensionality of the aberration space is reduced according to the resulting eigenvalues and their corresponding eigenvectors.
Analysis of the dimensionally reduced aberration space clarifies the main aberration terms introduced by perturbation. The types of aberrations caused by asymmetric and axial perturbations of the optical system are analyzed, and a quantitative expression for the introduced aberration terms is obtained from nodal aberration theory. According to the correspondence between Zernike terms and primary aberrations, the expression of the evaluation function M is derived. The evaluation function is applied to two design examples, and the optimization results show that the method offers higher optimization efficiency than existing methods and desensitizes tolerances for optical systems of different complexities and different introduced-aberration characteristics.
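The PCA step itself is standard and can be sketched briefly. In the following illustration (our code, with random stand-in data, not the paper's raytrace output), Monte Carlo perturbation trials yield a matrix of Zernike coefficients, and the eigen-decomposition of its covariance reveals which introduced aberration modes dominate.

```python
# Sketch of PCA on a Monte Carlo Zernike coefficient matrix; data are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_zernike = 500, 15               # hypothetical: 500 perturbed systems
C = rng.normal(size=(n_trials, n_zernike))  # stand-in for raytraced Zernike coefficients

C0 = C - C.mean(axis=0)                 # center each Zernike term
cov = C0.T @ C0 / (n_trials - 1)        # covariance of introduced aberrations
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1  # modes covering 95% variance
print(f"{k} principal aberration modes explain 95% of the perturbation response")
# The leading eigenvectors indicate which Zernike terms deserve weight in the
# evaluation function used during optimization.
```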
Optical communication and sensing
Research and application of long-distance high-drop GIL multiparameter integrated monitoring
Zhang Zhaochuang, Chen Jianguo, Wei Xiaoying, Zhang Bao, Liu Yupeng, Wang Xichun, Chen Qingtao, Zhang Mengchen, Zhou Tao
2024, 53(2): 20230602.   doi: 10.3788/IRLA20230602
  Objective  Compared with traditional detection and monitoring technology, distributed optical fiber sensing has clear advantages in both technical performance and practical application. It can detect physical quantities such as temperature, displacement, vibration, and pressure, and offers high sensitivity, immunity to electromagnetic interference, a large dynamic range, and a wide range of applications. To strengthen the monitoring and operation of GIL and reduce the impact of large numbers of sensing devices on the equipment structure, integrated monitoring of GIL shell vibration, temperature and strain is carried out: a single optical cable realizes multi-parameter signal sensing, and the operating law of the GIL is explored while the number of sensing devices is reduced.  Methods  The long-distance, high-drop GIL multi-parameter integrated distributed optical fiber sensing and monitoring system uses a multi-core optical cable as the sensing unit, with computer-controlled data acquisition, to achieve long-distance, wide-range pipeline safety monitoring. The system mainly comprises a temperature and strain measuring unit, a vibration measuring unit, a multi-core optical cable, a data acquisition center and a control platform (Fig.3).  Results and Discussions  Vibration, temperature, strain and other parameters of the gas-insulated metal-enclosed transmission line (GIL) are collected through distributed optical fiber sensing technology, and the collected data are studied and applied. Vibration, strain and temperature monitoring tests and data analysis show that the system accurately measures the vibration, strain and temperature sensed by the optical fiber, and the data are accurate and reliable. The temperature measurement accuracy of the test system is ±0.5 ℃, the temperature spatial resolution is ±0.6 m, the positioning accuracy is ±0.3 m, the strain measurement accuracy is ±20 με, and the vibration positioning accuracy is ±2 m. Through integrated monitoring of GIL shell vibration, temperature, strain and other parameters, the GIL operating law is explored while the number of sensing devices is reduced.  Conclusions  Distributed fiber sensing technology is used for multi-parameter signal collection and monitoring of the vibration, temperature and strain of the GIL shell. Combined with the application environment and project objectives, and on the basis of theoretical and applied research into the calculation methods, the reliability of the monitoring system is ensured, the performance indicators of the whole system are achieved, and the multi-parameter monitoring requirements of the long-distance, high-drop GIL pipeline are met. A single optical fiber realizes sensing and measurement at multiple points along the entire GIL pipeline and intuitively shows the running status of the GIL. In the future, the system will be extended to pipeline fault identification, improving the intelligence of the model and providing technical support for intelligent operation and maintenance of the GIL pipeline.
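As a way of picturing the system architecture, the following sketch (entirely hypothetical names and thresholds, not the deployed system) shows how one fused sample from the multi-core cable might be represented and screened: each location along the fiber yields temperature, strain and a vibration-event flag, and simple limits trigger alarms per pipeline section.

```python
# Hypothetical data model for one fused distributed-sensing sample.
from dataclasses import dataclass

@dataclass
class FiberSample:
    position_m: float      # distance along the GIL pipeline
    temperature_c: float   # from the temperature/strain measuring unit
    strain_ue: float       # microstrain
    vibration_event: bool  # from the vibration measuring unit

def check_alarms(sample: FiberSample, t_max=80.0, strain_max=500.0):
    """Return alarm labels for a sample; the limits here are illustrative only."""
    alarms = []
    if sample.temperature_c > t_max:
        alarms.append(f"over-temperature at {sample.position_m:.1f} m")
    if abs(sample.strain_ue) > strain_max:
        alarms.append(f"excess strain at {sample.position_m:.1f} m")
    if sample.vibration_event:
        alarms.append(f"vibration event at {sample.position_m:.1f} m")
    return alarms

print(check_alarms(FiberSample(1250.0, 85.2, 120.0, False)))
```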
Lasers & Laser optics
Advances in 2 μm single-longitudinal-mode all-solid-state pulsed lasers (cover paper·invited)
Yan Bingzheng, Mu Xikui, An Jiashuo, Qi Yaoyao, Ding Jie, Bai Zhenxu, Wang Yulei, Lv Zhiwei
2024, 53(2): 20230730.   doi: 10.3788/IRLA20230730
  Significance  The 2 µm single-longitudinal-mode (SLM) all-solid-state pulsed laser has attracted much attention for its applications in lidar, gas monitoring, laser medicine, material processing and scientific research, owing to its high stability, narrow spectral linewidth and other advantages. For instance, the 2 µm SLM laser features high atmospheric transmittance and eye safety, making it an ideal emission source for Doppler wind lidar. Moreover, the 2 µm band covers absorption peaks of gases such as H2O, CO2 and CH4, enabling its use as the emitter of differential absorption lidar for atmospheric greenhouse gas monitoring; by combining the 2 µm laser with other sensors, a comprehensive atmospheric environment monitoring system can also be established. In material processing, the 2 µm laser interacts with many materials, greatly simplifying the processing steps. Furthermore, the 2 µm laser has diverse applications in medical surgery, such as tissue cutting, stone crushing and eye surgery: through the characteristics of its working wavelength it can achieve precise tissue treatment while reducing damage to the surrounding tissue, offering a safer and more effective option for medical surgery. The 2 µm SLM all-solid-state pulsed laser likewise plays a vital role in military defense. The 2 µm laser output can be obtained by nonlinear frequency conversion or by directly pumping gain media doped with Tm3+ or Ho3+. However, the linewidth of the 2 µm output generated by nonlinear frequency conversion is relatively wide, making SLM output extremely difficult to achieve. In contrast to nonlinear frequency conversion, in which 1 µm lasers pump optical parametric oscillators, Tm3+- or Ho3+-doped Q-switched lasers, which typically use a special resonator design or introduce mode-selection elements, offer a more compact structure and higher stability for achieving 2 µm SLM pulsed output. With the significant development of laser technologies such as pumping, single-longitudinal-mode selection and high-energy pulse generation, the 2 µm SLM all-solid-state pulsed laser is developing towards smaller size, better performance and more stable output. In recent years, researchers at home and abroad have designed and fabricated various 2 µm SLM all-solid-state pulsed lasers; choosing the SLM selection scheme best suited to the specific application scenario, they have obtained 2 µm SLM pulsed lasers with different characteristics and successfully applied them in several fields. However, some technical challenges remain in the development of 2 µm SLM all-solid-state pulsed laser technology. In this paper, the common 2 µm SLM all-solid-state pulsed laser technologies based on the ring cavity, twisted-mode cavity, volume Bragg grating and injection seeding are analyzed and summarized.  Progress  This paper reviews the research progress of 2 µm SLM all-solid-state pulsed laser technology in conjunction with its applications across various fields. It introduces the working principles and characteristics of SLM selection techniques such as the ring cavity, twisted-mode cavity, volume Bragg grating, and injection seeding.
The laser output characteristics of the different structures, including central wavelength, output energy, pulse width, spectral full width at half maximum (FWHM), pulse repetition rate and beam quality factor, are summarized for each SLM selection technique. The results indicate that the 2 µm SLM all-solid-state pulsed laser has made significant strides in single-pulse energy, spectral linewidth and stability: high-energy SLM output can be achieved with a linewidth on the order of MHz and a pulse repetition frequency on the order of kHz. However, the output pulse width remains wide (on the order of nanoseconds), the structure is complex, and thermal effects are pronounced. Finally, the paper analyzes the current technical bottlenecks, offers corresponding solutions, and looks ahead to the future development of 2 µm SLM all-solid-state pulsed lasers.  Conclusions and Prospects  Driven by escalating demand from practical applications, 2 µm SLM all-solid-state pulsed lasers are evolving rapidly towards miniaturization, enhanced stability, high efficiency, narrow spectral linewidth and substantial output energy. Future development is expected to focus on further advances in output performance and on the exploration of innovative methods for realizing 2 µm SLM all-solid-state pulsed lasers. Moreover, with progress in laser technologies such as longitudinal-mode selection, pulse-width compression and thermal management, coupled with the continuous exploration of new gain media and laser structures, the comprehensive performance of 2 µm SLM all-solid-state pulsed lasers is anticipated to improve further to meet diverse application requirements.
A high-precision self-calibration method for laser tracker without common points
Qi Zhijun, Zhu Donghui, Luo Tao, Miao Xuece, He Xiaoye
2024, 53(2): 20230607.   doi: 10.3788/IRLA20230607
  Significance  Because the distance measurement error of a laser tracker is much smaller than its angle measurement error, high-precision coordinate measurement systems composed of multiple laser trackers are widely used in large-scale spatial measurement. Such a system requires self-calibration before measurement, a process that determines the distances between the centers of the different laser trackers. Although the method based on common points can achieve high accuracy, it places strict requirements on the measurement environment and entails a heavy workload; the method based on sphere fitting is simple to operate, but its self-calibration accuracy is low. In general, current methods struggle to balance measurement efficiency and measurement accuracy.  Progress  To overcome these shortcomings, we propose a self-calibration method based on a triangular structure. Firstly, an error analysis of the circle-fitting method is conducted in two-dimensional space. For simplicity, laser tracker A is assumed to measure the target ball on laser tracker B at a distance of 5 m; the angle measurement error of A, amplified by the measurement distance, produces errors of more than 30 μm in the measured points on the circle and further reduces the self-calibration accuracy. Secondly, noting that laser tracker B also provides angle observations, we propose a new method that couples the angle observations from B with the distance observations from A, thereby avoiding the large error contributed by laser tracker A. A function model based on the triangular structure is established, and the self-calibration result is obtained through iterative optimization. The advantages of the new method over circle fitting are analyzed quantitatively. Finally, our method extends readily to three-dimensional measurement.  Results  We verify the superiority of the algorithm through simulated and actual measurements. In the two-dimensional simulation, the true self-calibration value is set to 5 m. In 100 repeated experiments, most of the absolute deviations of the sphere-fitting method exceeded 20 μm, while those of the proposed method were below 10 μm; the root mean square error (RMSE) of the proposed algorithm was 20.98% of that of the sphere-fitting method. Moreover, the number of points the algorithm needs to measure is significantly smaller than for sphere fitting. Actual measurements were carried out in the alignment laboratory of the National Synchrotron Radiation Laboratory (NSRL) using two Leica AT930 laser trackers whose true separation was 7 231.548 8 mm. We tested the proposed method and the sphere-fitting method ten times each. The mean absolute bias and standard deviation of the former were only 6.21 μm and 2.44 μm respectively, while those of the latter were 21.30 μm and 7.37 μm. The proposed method thus showed better repeatability and superior accuracy in real measurement.  Conclusions and Prospects  We show that the accuracy loss of the self-calibration method based on sphere-center fitting stems from the low accuracy of the angle observations of the aiming laser tracker. A self-calibration method based on a triangular structure is therefore proposed, which pairs the angle observations on the target laser tracker with the high-accuracy distance observations.
The accuracy advantages of the algorithm are verified through quantitative analysis and two experiments. The algorithm completes self-calibration between laser trackers without common points and improves measurement efficiency while ensuring accuracy.
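To convey the flavor of coupling one tracker's distances with the other's angles, here is a toy 2-D least-squares formulation (our illustration only; it is not the paper's exact function model). Tracker A sits at the origin and B at the unknown baseline (L, 0); each target is parameterized by its bearing from A, and the residuals combine B's distances and the orientation-free angles B observes between successive targets.

```python
# Toy baseline self-calibration from A's distances plus B's observations.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
L_true = 5.0
targets = np.array([[2.0, 3.0], [4.5, 2.0], [6.0, 4.0]])  # true target positions

A = np.zeros(2); B = np.array([L_true, 0.0])
d_A = np.linalg.norm(targets - A, axis=1)       # distances at A (assumed accurate)
vB = targets - B
d_B = np.linalg.norm(vB, axis=1)                # distances at B
ang_B = np.arctan2(vB[:, 1], vB[:, 0])
beta = np.diff(ang_B) + rng.normal(0, 1e-5, 2)  # noisy inter-target angles at B

def residuals(x):
    L, alphas = x[0], x[1:]
    P = np.column_stack([d_A * np.cos(alphas), d_A * np.sin(alphas)])
    v = P - np.array([L, 0.0])
    ang = np.arctan2(v[:, 1], v[:, 0])
    return np.concatenate([np.linalg.norm(v, axis=1) - d_B, np.diff(ang) - beta])

alpha0 = np.arctan2(targets[:, 1], targets[:, 0]) + 0.05  # rough initial bearings
sol = least_squares(residuals, np.concatenate([[4.8], alpha0]))
print(f"estimated baseline: {sol.x[0]:.6f} m (true {L_true} m)")
```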
Techniques of continuous-wave fiber laser cutting of thin-wall metal materials
Zhai Zhaoyang, Li Xinxin, Zhang Yanchao, Liu Zhongming, Du Chunhua, Zhang Huaming
2024, 53(2): 20230569.   doi: 10.3788/IRLA20230569
  Objective  Thin-wall components exhibit characteristics such as light weight, high strength-to-weight ratio, excellent heat dissipation, and good vibration and acoustic performance. The aerospace industry shows a growing demand for thin-wall components, making precision laser cutting of thin-wall metal components a hot research topic at home and abroad. With the diversification of thin-wall metal component designs in industry, higher requirements are placed on cut-surface quality, even during high-speed laser cutting. Laser cutting quality is affected by many factors, but there have been limited studies on the combined and interrelated effects of process parameters such as defocus amount, cutting speed, and laser power on burr thickness and slag splash zone width, particularly for ultrathin metal materials. Therefore, this study conducted laser cutting experiments on 0.2 mm thick 304 stainless steel sheets to analyze the mechanisms of burr and slag splash formation. By adjusting process parameters such as cutting speed, laser power, and defocus amount, the study systematically summarized the variations in burr thickness and slag splash zone width when laser cutting 304 stainless steel workpieces, and through process optimization aimed to identify the best combination of processing parameters.  Methods  A single-factor experimental approach (Tab.1) was employed to investigate the effects of power and defocus distance at different cutting speeds on burr thickness and slag splash zone width. The applicable parameter ranges for laser cutting of thin-wall components were summarized, and optimal cutting parameters were identified through comparative experiments (Tab.2). The clamping method for stainless steel thin plates was also optimized, and the hypothesis was validated by the experimental results (Fig.12).  Results and Discussions  The variation patterns of burr thickness (Fig.1, Fig.3) and slag splash zone width (Fig.5, Fig.7) at different cutting speeds, powers, and defocus amounts were summarized using the single-factor experiments. The cutting technique for stainless steel thin plates was analyzed and optimized, yielding the optimal combination of processing parameters; the best clamping methods for thin-wall metal components were also determined for various processing conditions and shapes.  Conclusions  The burr thickness increases with laser power, first decreases and then increases with cutting speed, and gradually increases as the focus position rises. The width of the slag splash zone increases with laser power, decreases as cutting speed increases, and fluctuates only slightly with the focus position. The impact of different clamping methods on laser cutting results was compared, leading to recommended clamping methods for laser processing of thin-wall metal parts: when the required workpiece shape is relatively simple and precision requirements are modest, pneumatic clamping can be employed; for more complex shapes with smaller sizes and higher precision requirements, a supporting fixture is necessary.
Based on the analysis of the processing results, good cutting quality can be achieved for 0.2 mm thick 304 stainless steel sheets when the laser power is 125 W, the cutting speed is 10 m/min, the auxiliary gas pressure is 1.2 MPa, and the focus position lies between −0.3 mm and −0.5 mm.
Research and application progress of laser technology in diamond processing
Ye Sheng, Zhao Shangman, Xing Zhongfu, Peng Zhiyong, Zheng Yuting, Chen Liangxian, Liu Jinlong, Li Chengming, Wei Junjun
2024, 53(2): 20230567.   doi: 10.3788/IRLA20230567
  Significance  As an efficient non-contact processing method, laser processing is ideal for super-hard brittle materials such as diamond. High-energy laser ablation greatly improves the efficiency of diamond processing, while ultrafast laser processing preserves processing accuracy to the greatest extent. At present, lasers are widely used in diamond cutting, lapping, micro-grooving and other operations. Clarifying the laser-diamond interaction mechanism and the processing control mechanism lays the foundation for industrial deployment of laser diamond processing. Because of the limitations of traditional machining methods, laser processing has become the focus of research and development at home and abroad. It is foreseeable that laser processing will take a larger share of the diamond processing field, which is of great significance for the back-end application and assembly of diamond.  Progress  Firstly, the laser generation modes and laser processing mechanisms are introduced, including laser generation and its main characteristics, how diamond absorbs laser energy, and the changes in diamond properties and surface morphology caused by the laser. Current research focuses mainly on nanosecond and femtosecond lasers, the two typical laser types used for diamond processing; by wavelength, the commonly used lasers are green (532 nm), near-infrared (1064 nm) and ultraviolet. Pulsed lasers are the focus of current research: for diamond processing, shorter wavelengths and shorter pulse durations give higher processing quality, while longer pulse durations give higher processing efficiency. With the development of the technology, laser processing systems worldwide are evolving towards greater compatibility, that is, achieving good processing quality and high removal efficiency at the same time. Many countries have successfully carried out technical studies in laser diamond processing, and the technology is widely used in production. Investigations show that the pulse width of the laser has a decisive impact on the processing effect. For the different processing types of diamond, multi-method combined processing is currently used to meet the specific requirements of various tasks. The practical processing needs of diamond mainly include laser cutting, laser drilling, laser micro-grooving and lapping. The development status and technical highlights of laser diamond processing in recent years are summarized by processing type (Tab.4), and a comprehensive survey reveals the future development trends and common technical means of laser diamond processing. Laser diamond processing is one of the current mainstream processing methods; compared with traditional machining, laser processing can be automated, is low-cost and high-precision, and can achieve more accurate processing results. Finally, the application prospects of laser diamond processing are discussed in order to provide a reference for the development of domestic super-hard material processing technology.  Conclusions and Prospects  The field of laser diamond processing is still booming, while diamond processing needs remain complex and diverse.
For different processing types and application requirements, the laser type, processing mode and processing parameters must be analyzed case by case. The research progress of the laser diamond processing industry in recent years is summarized to provide a reference for the design and optimization of future laser diamond processing. Laser processing technology will continue to mature to meet varied processing needs, moving gradually towards high efficiency, high precision, low damage, high integration and production automation. In the foreseeable future, the application prospects of laser diamond processing will become ever broader.
Materials & Thin films
Study on SERS substrate preparation of rice flower type silver/titanium nitride thin films and detection of Rhodamine 6G
Di Zhigang, Gao Jianxin, Jia Chunrong, Zhou Hao, Li Jinxin, Liu Huaju, Wei Hengyong
2024, 53(2): 20230367.   doi: 10.3788/IRLA20230367
  Objective  Rhodamine 6G (R6G), also known as Rose Red 6G, is a water-soluble fluorescent dye often used in optics, laser optics, dyes and other fields. It is highly toxic to humans, and long-term exposure or use carries a cancer risk, so it is listed as an illegal additive. However, because of its low price and good coloring properties, it is often used by unscrupulous merchants in textiles, medicine, food, and so on. The current methods for detecting rhodamine are mainly high performance liquid chromatography and liquid chromatography-tandem mass spectrometry, but their procedures are tedious and costly. It is therefore necessary to design a new method for the rapid detection of rhodamine 6G.  Methods  The finite element method was used to simulate the rice flower type substrate, a rice flower type silver/titanium nitride composite SERS substrate was designed and prepared by electrochemical deposition, and the performance of the substrate and the minimum detection limit for rhodamine 6G were investigated by Raman detection.  Results and Discussions  To obtain the SERS enhancement of silver nanosubstrates with rice flower type structures of different morphologies and thus optimize the substrate design, finite element simulations of the electric field intensity were performed for different central sphere radii r, rice flower petal axes a, b, c, and central sphere-petal spacings d, and the SERS enhancement factor was calculated. The substrates were then prepared by electrochemical deposition, and the effects of the deposition voltage and of the concentration ratio of trisodium citrate to AgNO3 on the substrate structure and properties were investigated, so as to prepare the rice flower type silver/titanium nitride thin film substrate whose morphology is closest to the idealized physical model. The substrate was then used for trace detection of rhodamine 6G (R6G) to assess its Raman enhancement and stability. The experimental results show that the rice flower type TiN-Ag composite SERS substrate is closest to the idealized simulated morphology when the deposition voltage is 2 V and the concentration ratio of trisodium citrate to AgNO3 is 1∶1. The optimal enhancement factor of this substrate was calculated to be 10¹⁵, and the detection limit for rhodamine 6G reached 10⁻¹³ mol/L.  Conclusions  Based on finite element simulations, the field strength of the rice flower structure substrate was compared for different central sphere radii, petal axes and sphere-petal spacings, and the best enhancement factor of the rice flower silver/titanium nitride thin film substrate was found to be 10¹⁵. Comparative experiments show that, at a deposition voltage of 2 V and a trisodium citrate to AgNO3 concentration ratio of 1∶1, the rice flower TiN-Ag composite SERS substrate is closest to the idealized model morphology and detects rhodamine 6G at concentrations as low as 10⁻¹³ mol/L.
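The enhancement-factor calculation alluded to above is commonly done with the electromagnetic |E/E₀|⁴ approximation, which the following sketch illustrates (our code; the hot-spot field amplitudes are hypothetical stand-ins for values a FEM solver would return, not the paper's simulation output).

```python
# Sketch of the |E/E0|^4 electromagnetic SERS enhancement estimate.
import numpy as np

E0 = 1.0                                    # incident field amplitude (normalized)
E_local = np.array([3.2e3, 1.1e3, 7.5e3])   # hypothetical hot-spot amplitudes from FEM

ef = (E_local / E0) ** 4                    # per-hot-spot enhancement factors
print(f"peak enhancement factor ~ {ef.max():.2e}")
```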