Abstract:
Significance As a novel computational imaging technology, coherent diffraction imaging (CDI) offers advantages such as high resolution, lens-free operation, and low cost, making it highly promising for applications in quantitative phase reconstruction, high-resolution imaging, and X-ray imaging. Traditional CDI relies on coherent illumination and therefore demands very high temporal and spatial coherence from the light source. In practice, however, owing to the energy-time uncertainty relation and imperfections in the beam-generation mechanism, the light source typically exhibits some spectral broadening. The measured diffraction pattern is then a superposition of the diffraction patterns of different coherent modes, which invalidates traditional reconstruction theory and prevents high-precision reconstruction of the object's complex amplitude. Conventional solutions attempt to enhance the coherence of the source through spatial and spectral filtering, but such filtering wastes energy and broadens the pulse, and thus fails to fundamentally solve the problem of diffraction imaging under low-coherence illumination. Research on diffraction imaging with partially coherent or incoherent light sources is therefore of great significance for reducing the cost of diffraction imaging, realizing attosecond diffraction imaging, and enabling multispectral diffraction imaging, and it has become a research hotspot in recent years.
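As a minimal illustration of this superposition (the notation below is ours, not taken from the reviewed work), the measured broadband pattern can be written as a weighted incoherent sum over coherent modes:

```latex
I_{\text{meas}}(\mathbf{q}) \;=\; \sum_{k} w_k \,\bigl|\,\mathcal{F}\{\psi_k(\mathbf{r})\}(\mathbf{q})\,\bigr|^{2}
```

where \(\psi_k\) denotes the exit wave of the k-th coherent mode (e.g., one spectral component) and \(w_k\) its relative spectral weight. Conventional phase retrieval assumes a single term in this sum, which is why spectral broadening breaks the standard reconstruction.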
Progress First, the basic principles of monochromatic CDI are introduced, including the experimental arrangement, the oversampling requirement, and phase retrieval algorithms. Because CDI theory is fundamentally built on coherent illumination, existing CDI schemes become unsuitable under low-coherence illumination. According to the coherent-mode theory of light, a broadband diffraction pattern is an incoherent (intensity) sum of the narrowband diffraction patterns of the individual spectral components. The key to diffraction imaging with broadband illumination is therefore to decompose the different coherent modes from the mixed diffraction pattern. One class of methods assumes that the transmittance of the measured object is identical at all wavelengths. Under this assumption, the speckle patterns of the different coherent modes differ only by a scaling factor and can be decomposed directly from a single-shot pattern. However, this assumption ignores the variation of the object's complex transmittance with wavelength, so only the image at a single representative wavelength can be reconstructed, and the reconstruction can fail if the object's spectral response varies significantly. To overcome this limitation, other methods reconstruct coherent modes with genuine spectral differences, at the cost of introducing multiple exposures, structured illumination, or ptychography. Reconstructing the spectral information further enhances the measurement capability of diffraction imaging and offers significant application value in fields such as material analysis and biological imaging. However, because such reconstruction requires multiple exposures, further development is still needed for attosecond imaging.
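The decomposition schemes reviewed above differ in their details; purely as a rough sketch (the function name, spectral weights, and simple support constraint are illustrative assumptions, not the specific algorithms discussed in the reviewed literature), the loop below shows how a single broadband measurement can drive a multi-mode error-reduction iteration: each coherent mode is propagated independently, their intensities are summed incoherently, and a common intensity ratio updates all modes before the support constraint is reapplied. When the object's transmittance is assumed wavelength-independent, the modes reduce to rescaled copies of one another.

```python
import numpy as np

def broadband_error_reduction(I_meas, support, weights, n_modes, n_iter=200, rng=None):
    """Minimal multi-mode (broadband) error-reduction sketch.

    I_meas  : measured broadband diffraction intensity (2D array)
    support : boolean mask of the object support (2D array)
    weights : relative spectral weights of the coherent modes (length n_modes)
    """
    rng = np.random.default_rng() if rng is None else rng
    shape = I_meas.shape
    # Random complex start for each coherent mode (one per spectral component).
    modes = [support * (rng.random(shape) + 1j * rng.random(shape))
             for _ in range(n_modes)]

    for _ in range(n_iter):
        # Forward model: the broadband pattern is an incoherent (intensity) sum.
        F = [np.fft.fft2(m) for m in modes]
        I_model = sum(w * np.abs(f) ** 2 for w, f in zip(weights, F))

        # Fourier-modulus constraint: rescale every mode by the same intensity
        # ratio, keeping the mutual proportions of the modes fixed.
        ratio = np.sqrt(I_meas / (I_model + 1e-12))
        modes = [np.fft.ifft2(f * ratio) for f in F]

        # Real-space constraint: enforce the known support on every mode.
        modes = [m * support for m in modes]

    return modes
```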
Conclusions and Prospects The development of broadband diffraction imaging theory not only reduces the cost of traditional CDI schemes but also improves imaging quality under low-coherence illumination, showing significant research value in fields such as high-dimensional spectral imaging and ultrafast imaging. However, existing schemes still face issues in imaging time, spectral accuracy, and object adaptability, leaving them some distance from practical application. Enhancing the spectral accuracy of broadband diffraction imaging is the key to achieving spectral analysis of complex-amplitude information. Accelerating the imaging procedure and reducing the sampling-rate requirements of multi-coherent-mode decomposition are prerequisites for attosecond diffraction imaging. It is believed that these issues will be gradually resolved by combining technologies such as compressed-sensing theory and artificial neural networks.