Paper ID: 2405.09864

Solar multi-object multi-frame blind deconvolution with a spatially variant convolution neural emulator

A. Asensio Ramos

The study of astronomical phenomena through ground-based observations is always challenged by the distorting effects of Earth's atmosphere. Traditional methods of post-facto image correction, essential for compensating these distortions, often rely on simplifying assumptions that limit their effectiveness, particularly in the presence of spatially variant atmospheric turbulence. Such cases are usually handled by partitioning the field of view into small patches, deconvolving each patch independently, and merging the patches back together. This approach is inefficient and can produce artifacts. Recent advances in computational techniques and the advent of deep learning offer new pathways to address these limitations. This paper introduces a framework that leverages a deep neural network to emulate spatially variant convolutions, substantially improving the efficiency and accuracy of astronomical image deconvolution. By training on a dataset of images convolved with spatially invariant point spread functions and validating its generalization to spatially variant conditions, this approach represents a significant advance over traditional methods. The convolution emulator is used as the forward model in a multi-object multi-frame blind deconvolution algorithm for solar images, enabling the deconvolution of solar observations across large fields of view without resorting to patch-wise mosaicking and thus avoiding the artifacts associated with such techniques. The method also brings a substantial computational advantage, reducing processing times by orders of magnitude.

Submitted: May 16, 2024
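
For context, the sketch below illustrates the brute-force spatially variant convolution that a neural emulator of this kind is meant to approximate efficiently: every output pixel is weighted by its own local point spread function. This is not code from the paper; the function name `spatially_variant_convolve` and the `psf_at` callback are hypothetical, and the quadratic per-pixel cost is exactly what motivates replacing this operation with a learned emulator.

```python
import numpy as np

def spatially_variant_convolve(image, psf_at, kernel_size=11):
    """Brute-force spatially variant convolution (illustrative only).

    `psf_at(i, j)` is assumed to return the local PSF for pixel (i, j)
    as a (kernel_size, kernel_size) array normalized to unit sum.
    """
    half = kernel_size // 2
    padded = np.pad(image, half, mode="reflect")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + kernel_size, j:j + kernel_size]
            # Flip the kernel so this is a true convolution rather than
            # a correlation; each output pixel uses its own local PSF,
            # so the cost scales as N_pixels * kernel_size**2.
            out[i, j] = np.sum(patch * psf_at(i, j)[::-1, ::-1])
    return out
```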