Atmospheric turbulence is a major source of image degradation in long-range imaging systems.
Although numerous deep learning-based turbulence mitigation (TM) methods have been proposed,
many are slow, memory-hungry, and generalize poorly. In the spatial domain, convolution-based
methods have a limited receptive field and therefore cannot capture the long-range spatial
dependencies induced by turbulence. In the temporal domain, methods relying on self-attention can,
in theory, exploit the lucky effects of turbulence, but their quadratic complexity makes it difficult
to scale to many frames, while traditional recurrent aggregation methods are hard to parallelize.
In this paper, we present a new TM method based on two concepts: (1) A turbulence mitigation
network based on the Selective State Space Model (MambaTM). MambaTM provides a global receptive
field in each layer across spatial and temporal dimensions while maintaining linear computational
complexity. (2) Learned Latent Phase Distortion (LPD). LPD guides the state space model. Unlike
classical Zernike-based representations of phase distortion, the new LPD map uniquely captures
the actual effects of turbulence, significantly improving the model’s capability to estimate degradation
by reducing the ill-posedness. Our proposed method outperforms current state-of-the-art networks on various
synthetic and real-world TM benchmarks while offering significantly faster inference.
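To make the complexity claim concrete, below is a minimal, illustrative sketch of the linear-time selective state-space scan that underlies Mamba-style layers. This is not the authors' implementation; the function name `selective_scan`, the tensor shapes, and the random parameters are assumptions chosen purely to show why the recurrence costs O(T) in sequence length, in contrast to the O(T²) pairwise interactions of self-attention.

```python
# Illustrative sketch (NOT the MambaTM code): a selective state-space scan.
# Per-timestep parameters A_t, B_t, C_t are "selected" from the input, and
# the hidden state evolves recurrently:
#   h_t = A_t * h_{t-1} + B_t * x_t,   y_t = h_t @ C_t
import numpy as np

def selective_scan(x, A, B, C):
    """x: (T, D) inputs; A, B: (T, D, N) per-step decay/input maps;
    C: (T, N) readout. Returns y: (T, D). Linear in sequence length T."""
    T, D = x.shape
    N = A.shape[-1]
    h = np.zeros((D, N))          # hidden state carried across time steps
    y = np.empty((T, D))
    for t in range(T):
        # Input-dependent (selective) update: one O(D*N) step per frame,
        # so the whole scan is O(T), unlike quadratic self-attention.
        h = A[t] * h + B[t] * x[t][:, None]
        y[t] = h @ C[t]
    return y

rng = np.random.default_rng(0)
T, D, N = 16, 4, 8                # toy sizes: 16 frames, 4 channels, 8 states
x = rng.standard_normal((T, D))
A = rng.uniform(0.5, 0.99, (T, D, N))   # decay factors < 1 for stability
B = rng.standard_normal((T, D, N))
C = rng.standard_normal((T, N))
y = selective_scan(x, A, B, C)
print(y.shape)  # (16, 4)
```

In practice such scans are parallelized with associative-scan tricks on GPU rather than a Python loop; the sketch only shows the recurrence that gives each layer a global receptive field at linear cost.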
Videos are from the URG-T and BRIAR datasets; all subjects gave consent to show their images.
If the videos do not play correctly, please use Chrome or download them.
Comparison with DATUM (CVPR 2024) and Turb-Seg-Res (CVPR 2024)
LPD-based re-degraded results
Restoration performance
Re-degradation of the restored video above, based on the estimated LPD