Neural network-based image reconstruction in swept-source optical coherence tomography using undersampled spectral data. (arXiv:2103.03877v1 [eess.IV])

Optical Coherence Tomography (OCT) is a widely used non-invasive biomedical
imaging modality that can rapidly provide volumetric images of samples. Here,
we present a deep learning-based image reconstruction framework that can
generate swept-source OCT (SS-OCT) images using undersampled spectral data,
without any spatial aliasing artifacts. This neural network-based image
reconstruction does not require any hardware changes to the optical set-up and
can be easily integrated with existing swept-source or spectral domain OCT
systems to reduce the amount of raw spectral data to be acquired. To show the
efficacy of this framework, we trained and blindly tested a deep neural network
using mouse embryo samples imaged by an SS-OCT system. Using 2-fold
undersampled spectral data (i.e., 640 spectral points per A-line), the trained
neural network blindly reconstructs 512 A-lines in ~6.73 ms on a desktop
computer, removing the spatial aliasing artifacts caused by spectral
undersampling and closely matching the images of the same samples
reconstructed using the full spectral OCT data (i.e., 1280 spectral points per
A-line). We also demonstrate that this framework can be extended to process
3-fold undersampled spectral data per A-line, with some degradation in the
reconstructed image quality compared to 2-fold spectral undersampling. This
deep learning-enabled image reconstruction
approach can be broadly used in various forms of spectral domain OCT systems,
helping to increase their imaging speed without sacrificing image resolution
and signal-to-noise ratio.
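
To make the aliasing problem concrete, the following minimal numerical sketch (not the authors' code) simulates a spectral interferogram with two reflectors and shows how 2-fold spectral undersampling folds the deeper reflector back into the A-line. The reflector depths and amplitudes are illustrative assumptions; only the 1280- and 640-point spectral sampling follows the abstract.

```python
# Minimal sketch (assumed, not from the paper): FFT-based A-line reconstruction
# from a simulated SS-OCT spectral interferogram, with and without 2x spectral
# undersampling, to illustrate the spatial aliasing the network is trained to remove.
import numpy as np

N_FULL = 1280                      # spectral points per A-line (full sampling)
n = np.arange(N_FULL)

# Two simulated reflectors: one shallow, one deep enough to alias after
# 2x undersampling (depth bins and amplitudes are illustrative).
depths = [100, 400]
amps = [1.0, 0.6]
spectrum = sum(a * np.cos(2 * np.pi * z * n / N_FULL) for a, z in zip(amps, depths))

# Full reconstruction: magnitude of the FFT of all 1280 spectral points.
aline_full = np.abs(np.fft.rfft(spectrum))

# 2x undersampling: keep every other spectral point (640 points per A-line).
aline_us = np.abs(np.fft.rfft(spectrum[::2]))

# The shallow reflector stays at depth bin 100, but the deep reflector exceeds
# the halved Nyquist depth (320 bins) and folds back to 640 - 400 = 240.
print("full-spectrum peaks:        ", sorted(np.argsort(aline_full)[-2:]))
print("undersampled-spectrum peaks:", sorted(np.argsort(aline_us)[-2:]))
```

Running this prints peaks at depth bins [100, 400] for the full spectrum but [100, 240] for the undersampled one, i.e., the deep reflector folds back into the shallower half of the A-line, which is exactly the kind of artifact the reconstruction network must undo.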
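
The abstract does not describe the network architecture, loss, or training procedure, so the sketch below is only a generic illustration of the implied supervised setup: a model takes a 2-fold undersampled spectrum as input and is trained against the A-line reconstructed from the corresponding full spectrum. The layer sizes, optimizer settings, and MSE loss are placeholders, not the authors' choices.

```python
# Hedged sketch of the implied supervised training setup: map a 640-point
# undersampled spectrum to the A-line obtained from the full 1280-point
# spectrum (641 rfft bins), which serves as the training target.
import torch
import torch.nn as nn

class UndersampledToAline(nn.Module):
    def __init__(self, n_in=640, n_out=641):   # 641 = rfft bins of a 1280-point spectrum
        super().__init__()
        # Illustrative fully connected model; the paper's architecture is not
        # specified in the abstract.
        self.net = nn.Sequential(
            nn.Linear(n_in, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_out),
        )

    def forward(self, x):
        return self.net(x)

model = UndersampledToAline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on synthetic tensors standing in for
# (undersampled spectrum, full-spectrum A-line) pairs from the OCT system.
x = torch.randn(32, 640)           # batch of 2x-undersampled spectra
y = torch.randn(32, 641)           # matching full-spectrum A-line magnitudes
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```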

Source: https://arxiv.org/abs/2103.03877
