Abstract:
Benefiting from their single-photon sensitivity, single-photon avalanche diode
(SPAD) arrays have been widely applied in various fields such as fluorescence
lifetime imaging and quantum computing. However, large-scale high-fidelity
single-photon imaging remains a major challenge, due to the complex hardware
fabrication and severe multi-source noise of SPAD arrays. In this work, we
introduce deep learning into SPAD, enabling super-resolution single-photon
imaging of more than an order of magnitude, with significant enhancement of
bit depth and imaging quality. We first studied the complex photon flow model
of SPAD electronics to accurately characterize multiple physical noise
sources, and collected a real SPAD image dataset (64 $\times$ 32 pixels, 90
scenes, 10 different bit depths, 3 different illumination flux levels, 2790
images in total) to calibrate the noise model parameters. With this real-world
physical noise model, we synthesized, for the first time, a large-scale
realistic single-photon image dataset (image pairs at 5 different resolutions
up to megapixel scale, 17250 scenes, 10 different bit depths, 3 different
illumination flux levels, 2.6 million images in total) for subsequent network
training.
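As a rough illustration of such a forward model, the following sketch
simulates a simplified SPAD measurement: Poisson-rate signal plus dark counts,
gated into binary frames that are summed to a target bit depth. It is a
minimal, assumption-laden stand-in (parameter names such as pde,
dark_count_rate, and gate_time are illustrative), not the paper's calibrated
photon flow model; real SPAD arrays additionally exhibit effects such as
afterpulsing, crosstalk, and hot pixels that are omitted here.

    # Minimal sketch of a simplified SPAD forward model (illustrative only).
    import numpy as np

    def spad_measure(radiance, bit_depth=8, flux=1e6, pde=0.2,
                     dark_count_rate=100.0, gate_time=1e-6, rng=None):
        """Simulate a b-bit SPAD image of a radiance map normalized to [0, 1].

        flux:            peak photon flux at the sensor (photons/s/pixel)
        pde:             photon detection efficiency
        dark_count_rate: dark counts per second per pixel
        gate_time:       exposure of one binary frame (s)
        """
        rng = np.random.default_rng() if rng is None else rng
        # Mean detections per gate: signal photons plus dark counts.
        lam = (pde * flux * radiance + dark_count_rate) * gate_time
        # A SPAD pixel fires at most once per gate: P(click) = 1 - exp(-lam).
        p_click = 1.0 - np.exp(-lam)
        # Summing 2**b - 1 Bernoulli frames equals one Binomial draw per pixel.
        n_frames = 2 ** bit_depth - 1
        return rng.binomial(n_frames, p_click).astype(np.uint16)

    if __name__ == "__main__":
        scene = np.linspace(0, 1, 64 * 32).reshape(32, 64)  # toy 64x32 target
        img = spad_measure(scene, bit_depth=8)
        print(img.min(), img.max())  # counts within [0, 255] at 8-bit depth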
To tackle the severe super-resolution challenge of SPAD inputs with low bit
depth, low resolution, and heavy noise, we further built a deep transformer
network with a content-adaptive self-attention mechanism and gated fusion
modules, which can mine global contextual features to remove multi-source
noise and extract full-frequency details.
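The abstract does not specify the exact module designs, but a generic gated
fusion of this kind can be sketched as below: a learned, content-dependent
gate blends a global-context branch (e.g., the self-attention output) with a
local detail branch. The class name GatedFusion and the two-branch layout are
assumptions for illustration, not the paper's definition.

    # Hedged sketch of a gated fusion module between two feature branches.
    import torch
    import torch.nn as nn

    class GatedFusion(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # Per-pixel, per-channel gate predicted from both branches.
            self.gate = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, global_feat, local_feat):
            g = self.gate(torch.cat([global_feat, local_feat], dim=1))
            # Convex blend: g selects global context, (1 - g) local detail.
            return g * global_feat + (1.0 - g) * local_feat

    if __name__ == "__main__":
        fuse = GatedFusion(channels=64)
        a = torch.randn(1, 64, 32, 64)  # global-context features
        b = torch.randn(1, 64, 32, 64)  # local-detail features
        print(fuse(a, b).shape)  # torch.Size([1, 64, 32, 64])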
We applied the technique to a series of experiments including macroscopic and
microscopic imaging, microfluidic inspection, and Fourier ptychography. The
experiments validate the technique's state-of-the-art super-resolution SPAD
imaging performance, with a PSNR gain of more than 5 dB over existing methods.