
Towards Computational Acoustic Cameras: Neural Deconvolution and Rendering for Synthetic Aperture Sonar

Suren Jayasuriya

Assistant Professor, Arizona State University

Abstract: Acoustic imaging leverages sound to form visual products, with applications including biomedical ultrasound and sonar. In particular, synthetic aperture sonar (SAS) has been developed to generate high-resolution imagery of both in-air and underwater environments. In this talk, we explore the application of implicit neural representations and neural rendering to SAS imaging and highlight how such techniques can enhance acoustic imaging for both 2D and 3D reconstructions. Specifically, we discuss the challenges of applying neural rendering to acoustic imaging, especially handling the phase of reflected acoustic waves, which is critical for high spatial resolution in beamforming. We present two works: enhanced 2D circular SAS deconvolution in air, and a general neural rendering framework for 3D volumetric SAS. In addition, we present recent research on using Gaussian splatting for camera + sonar fusion to improve optical 3D reconstruction. This research is a starting point for realizing the next generation of acoustic + optical cameras for a variety of applications in air and water environments.
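To illustrate why phase matters for spatial resolution, the sketch below shows a minimal complex-valued delay-and-sum (backprojection) beamformer, the classical SAS image-formation step the abstract refers to. This is an illustrative sketch only, not the speaker's implementation: the function name, arguments, and parameter choices (baseband echoes, a 343 m/s in-air sound speed) are assumptions for the example. The key line re-applies the carrier phase so echoes from different sensor positions sum coherently; summing magnitudes instead would lose the phase information and degrade resolution.

import numpy as np

def delay_and_sum(echoes, sensor_positions, pixel_grid, fs, fc, c=343.0):
    """Backproject complex baseband echoes onto a pixel grid.

    echoes:           (num_sensors, num_samples) complex baseband returns
    sensor_positions: (num_sensors, 3) transducer positions in meters
    pixel_grid:       (num_pixels, 3) scene points to reconstruct
    fs:               sampling rate in Hz
    fc:               carrier (center) frequency in Hz
    c:                speed of sound (343 m/s in air)
    """
    num_sensors, num_samples = echoes.shape
    image = np.zeros(len(pixel_grid), dtype=np.complex128)
    for s in range(num_sensors):
        # Round-trip distance from the sensor to each pixel and back.
        dist = 2.0 * np.linalg.norm(pixel_grid - sensor_positions[s], axis=1)
        delay = dist / c
        idx = np.clip((delay * fs).astype(int), 0, num_samples - 1)
        # Re-apply the carrier phase removed during basebanding so the
        # per-sensor contributions add coherently; dropping this factor
        # (magnitude-only summation) destroys spatial resolution.
        image += echoes[s, idx] * np.exp(2j * np.pi * fc * delay)
    return np.abs(image)  # magnitude image; phase was used coherently above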

Bio: Dr. Suren Jayasuriya has been an assistant professor at Arizona State University, jointly in the School of Arts, Media and Engineering (AME) and the School of Electrical, Computer and Energy Engineering (ECEE), since 2018. Before this, he was a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University in 2017. Suren received his Ph.D. in ECE from Cornell University in January 2017 and graduated from the University of Pittsburgh in 2012 with a B.S. in Mathematics (with departmental honors) and a B.A. in Philosophy. His research interests span computational cameras, computer vision and graphics, and acoustic imaging/remote sensing.
