Complex-valued deep-learning model for 3D SAR modeling and image generation
Nithin Sugavanam
Department of ECE, The Ohio State University, Columbus, OH, USA
[email protected]

Emre Ertin
Department of ECE, The Ohio State University, Columbus, OH, USA
[email protected]

Research conducted in this publication was funded in part by an Air Force SBIR grant FA8650-22-C-1009 through a subcontract to Kitware, Inc.
Abstract—Synthetic aperture radar (SAR) collects samples of the 3D Spatial Fourier transform on a two-dimensional manifold corresponding to the backscatter data of wideband pulses launched from different look angles along an aperture. Regularized inversion methods typically yield a sparse set of points and artifacts due to limited measurements in the elevation domain. We present a coordinate-based multi-layer perceptron (MLP) that enforces a smooth-surface prior and models the complex-valued scattering coefficients. The 3D geometry is represented using a signed distance function, while the scattering coefficients are represented using real and imaginary channels. Since estimating a smooth surface from a sparse and noisy point cloud is an ill-posed problem, we regularize the surface estimation by sampling points from the implicit surface representation during the training step. The loss function also incorporates the consistency between the predicted complex-valued scattering coefficients and the true scattering coefficients. We validate the model's reconstruction ability using the Civilian Vehicle Data Domes.
Index Terms—Volumetric synthetic aperture radar, implicit neural representations, complex-valued scattering
I. INTRODUCTION
Synthetic aperture radar (SAR) collects samples of the
3D Spatial Fourier transform on a two-dimensional manifold
corresponding to the backscatter data of wideband pulses
launched from different look angles along an aperture [1],
[2]. Traditional 3D reconstruction techniques aggregate and index phase-history data in the spatial Fourier domain and apply an inverse 3D non-uniform Fourier transform to the data to obtain volumetric imagery [3], [4]. Obtaining high-resolution imagery requires the data to be densely distributed in both the azimuth and elevation angles, which is often not satisfied in operational scenarios. For instance, the sampling in the elevation dimension is sparse and non-uniform in the GOTCHA dataset [5], [6]. To cope with the sparsely sampled data in elevation, regularized inversion methods [7] have been proposed that combine a non-uniform fast Fourier transform model of the forward operator with regularization priors for the scene that promote structured solutions such as sparsity [8], limited persistence in the viewing-angle domain [9]–[11], and vertical structures [12]. These regularization-based approaches promote dominant scattering mechanisms [13], [14] and produce sparse point clouds in the spatial domain.
Classical 3D reconstruction: A common approach formulates 3D reconstruction as a regularized inverse problem in the phase-history domain by utilizing sub-apertures and imposing regularization that enforces sparsity in the spatial domain and promotes correlation of scattering coefficients across neighboring sub-apertures [15]. The backscattered response of the object has also been modeled as a superposition of the backscattered responses of 3D canonical scattering mechanisms such as the dihedral, trihedral, plate, cylinder, and top-hat [16]. References [9], [12], [17] jointly model the sparsity in scattering-center locations and the persistence of scattering coefficients in the azimuth domain.
Deep learning-based approaches: The 3D SAR imaging problem is formulated using an unrolled network in reference [18]: the iterations of the approximate message-passing algorithm that solves the inverse problem are implemented as a sequence of layers in cascaded form. LIDAR-based 3D reconstruction from photon-count measurements is presented in [19]; the reconstruction uses maximum-likelihood estimation of the depth under the assumption of Poisson measurement noise, together with a plug-and-play denoiser for point-cloud resampling that enforces the points to be sampled from a smooth surface, as presented in [20]–[22]. Recent developments in 3D scene reconstruction from multi-view 2D imagery utilize continuous functions parameterized by deep, fully connected networks, referred to as multilayer perceptrons (MLPs). These coordinate-based MLPs are trained to map spatial coordinates to a representation of shape or density at each spatial location. Coordinate-based MLPs have been used to represent images [23], volume density [24], occupancy [25], signed distance functions [26], and magnetic resonance imagery [27], [28]. A Fourier feature layer is used as the input layer of the coordinate-based MLPs that operate on the spatial coordinates p and viewing direction v. These Fourier features are given by
γ(p) = [a_1 cos(2π b_1^T p), a_1 sin(2π b_1^T p), · · · , a_m cos(2π b_m^T p), a_m sin(2π b_m^T p)]^T,
γ(v) = [c_1 cos(2π d_1^T v), c_1 sin(2π d_1^T v), · · · , c_m cos(2π d_m^T v), c_m sin(2π d_m^T v)]^T,

where the vectors b_1, · · · , b_m and d_1, · · · , d_m are the weights used in the positional and viewing-direction encodings.

A signed distance function-based neural network [26] has been utilized to model object shapes such that the zero-level set (iso-set) of the signed distance function coincides with the surface of the object. The signed distance function (SDF) is given by
SDF(p) = 0 if p ∈ Ω;  +s if p ∈ Ω+;  −s if p ∈ Ω−,   (1)

where s > 0, Ω represents the object boundary, Ω+ represents the region outside the object, and Ω− represents the region inside the object.
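For concreteness, here is a minimal NumPy sketch (not from the paper) that evaluates this Fourier-feature encoding and an analytic SDF satisfying the sign convention of Eq. (1); the Gaussian draw of the frequency vectors b_i, the unit amplitudes a_i, and the spherical test shape are illustrative assumptions:

```python
import numpy as np

def fourier_features(p, B, a):
    """gamma(p) = [a_i cos(2*pi*b_i^T p), a_i sin(2*pi*b_i^T p)] for a batch of points.

    p: (N, 3) spatial coordinates, B: (m, 3) frequency vectors b_i, a: (m,) amplitudes a_i.
    """
    proj = 2.0 * np.pi * p @ B.T                      # (N, m) inner products b_i^T p
    return np.concatenate([a * np.cos(proj), a * np.sin(proj)], axis=-1)  # (N, 2m)

def sphere_sdf(p, center, radius):
    """Analytic SDF of a sphere: positive outside, negative inside, zero on the surface."""
    return np.linalg.norm(p - center, axis=-1) - radius

rng = np.random.default_rng(0)
B = rng.normal(scale=2.0, size=(9, 3))                # assumed Gaussian frequencies, m = 9
a = np.ones(9)
p = rng.uniform(-1.0, 1.0, size=(128, 3))
gamma = fourier_features(p, B, a)                     # (128, 18) encoded coordinates
sdf = sphere_sdf(p, np.zeros(3), 0.5)                 # signed distances matching Eq. (1)
```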
Reference [29] proposed an end-to-end neural architecture that learns 3D geometry from masked 2D images and camera estimates. The color of a pixel is expressed as a differentiable function, termed the Implicit Differentiable Renderer (IDR), of the three unknowns of a scene: the geometry, its appearance, and the camera properties. The 3D shape of the object is represented as the zero-level set of a neural network using a signed distance function. Reference [30] proposed a regularized loss function for a coordinate-based MLP, denoted by f(p; Ψ), which aims to make the network output a level set of a signed distance function. Given p ∈ P ∼ P_P(p), for a distribution P_P(p), the authors define the term E_p[(∥∇_p f(p; Ψ)∥₂ − 1)²]. This term, called the Eikonal term, encourages the gradients to have unit ℓ₂-norm. The motivation for this term is the result in [31] that guarantees the solution to have a zero-level set on the set P. The solution is not unique, but the authors show that using stochastic gradient descent to optimize the network parameters results, with high probability, in a solution close to a signed distance function with a smooth zero-level set. Reference [32] provides a method to sample points from a neural network that represents a surface in the form of zero-level sets; the technique uses Newton's method to project an arbitrary point onto the zero-level set representing the surface. Reference [33] samples points from an implicit neural surface, defines them as iso-points, and utilizes these iso-points to refine the implicit representation via optimization.
In this work, we jointly model the surfaces of the object and the angle-dependent scattering centers using an implicit neural network. We implement a cascaded neural network architecture in which the first section learns the surface representation from sparse SAR point clouds recovered from phase-history data, and the second section models the scattering coefficients. In the following, we present the point-cloud extraction, denoising, neural surface representation, and scattering model in Section II; describe the datasets used in Section III; and conclude with the validation results in Section IV.

II. DENOISING SPARSE POINT CLOUDS USING SURFACE PRIORS AND MODELING VIEW-DEPENDENT SCATTERING COEFFICIENTS
We present a method to recover the point-cloud representation of the object from SAR phase-history data. The SAR measurements are collected over azimuth angles [θ_{1,e}, · · · , θ_{N_P,e}] and elevation angles [ϕ_{1,e}, · · · , ϕ_{N_P,e}], with elevation passes e = 1, · · · , N_el and frequency points [f_1, · · · , f_{N_F}]. The phase-history data are denoted by Y_e = [Y_{1,e} Y_{2,e} · · · Y_{N_P,e}], where Y = [Y_1; · · · ; Y_{N_el}] ∈ C^{N_F × N_P × N_el}. We model the object as a collection of dominant scattering centers, and each scattering center is isotropic over a sub-aperture. We assume the pulses collected over different viewing angles are grouped into N_S sub-apertures over N_el elevation passes. The backscattered signal in sub-aperture m is defined as

Ȳ(f_i, θ_{j,e}, ϕ_{j,e}; θ̄_m, ϕ̄_m) = Σ_{k=1}^{K} g_k(θ̄_m, ϕ̄_m) exp(−(j4πf_i/c)(x_k cos ϕ_{j,e} cos θ_{j,e} + y_k sin θ_{j,e} cos ϕ_{j,e} + z_k sin ϕ_{j,e})) + n(f_i, θ_{j,e}, ϕ_{j,e}),   (2)
where i = 1, · · · , N_F, j = 1, · · · , N_P, e = 1, · · · , N_el, and m = 1, · · · , N_S; θ̄_m is the mean azimuth angle and ϕ̄_m the mean elevation angle in sub-aperture m. The K-sparse scattering centers are located at spatial coordinates p_k = [x_k, y_k, z_k] with sub-aperture-dependent scattering coefficients s_k^m ∈ C. The region of interest is discretized into N_x × N_y × N_z voxels, and the resolution of these grid points is chosen according to the range resolution. The measurement noise and modeling mismatches are combined in the complex-valued noise term n(f_i, θ_{j,e}, ϕ_{j,e}). The sub-aperture measurements can be expressed as

Y(θ̄_m, ϕ̄_m) = F_3D^m(G_m) + N,   (3)

where Y(θ̄_m, ϕ̄_m) ∈ C^{N_F × N_S × N_el} is the sub-aperture phase-history data, F_3D^m(·) is the non-uniform Fourier transform [34] that maps the spatial coordinates to the K-space measurement space, and G_m ∈ C^{N_x × N_y × N_z} are the complex-valued scattering coefficients for each spatial location.
Given the measurements, the inverse problem of recovering the scattering coefficients is solved using regularized least squares with a sparsity-promoting prior:

min_{G_m} ∥G_m∥₁  subject to  ∥Y(θ̄_m) − F_3D^m(G_m)∥₂² ≤ σ²,

where σ² is the modeling-error energy.
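The paper does not specify the solver; as one standard option, the Lagrangian form of this problem, min_G ½∥A G − Y∥₂² + λ∥G∥₁, can be handled with the iterative shrinkage-thresholding algorithm (ISTA). A generic sketch follows, with a dense matrix A standing in for the NUFFT operator F_3D^m; λ, the step size, and the iteration count are assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Complex soft-thresholding: shrink magnitudes by t, keep phase."""
    mag = np.abs(x)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * x, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||A g - y||_2^2 + lam*||g||_1 for complex A, y."""
    g = np.zeros(A.shape[1], dtype=complex)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ g - y)             # gradient of the data-fidelity term
        g = soft_threshold(g - step * grad, step * lam)
    return g
```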
The recovered scattering centers form a sparse voxel representation for sub-aperture m. The detected point cloud is denoted by P. The initial estimate of the normal n_i ∈ N_P for each point p_i ∈ P is computed using local principal component analysis; for each point, we find the nearest neighbors within a distance of at most 0.3 m. We utilize the signed distance function defined in Eq. (1) to represent the object and learn a coordinate-based MLP to predict the zero-level set of the SDF. Since the SAR point cloud is sparse, we use the concept of iso-points developed in Reference [33]. The iso-points set Q_iso consists of the points sampled from the zero-level set of the neural network such that

Q_iso = {q : f(q; Ψ) = 0}.   (4)
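For the normal initialization described above, a minimal local-PCA sketch follows; the use of SciPy's KD-tree is an implementation assumption, while the 0.3 m neighborhood radius is from the text. Because the losses in Eqs. (11)-(12) use the absolute cosine similarity, the sign ambiguity of these normals is harmless.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normals(points, radius=0.3):
    """Unit normal per point: eigenvector of the local covariance with the
    smallest eigenvalue, using neighbors within `radius` meters."""
    tree = cKDTree(points)
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        nbrs = points[idx] - points[idx].mean(axis=0)   # centered neighborhood
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)         # eigenvalues in ascending order
        normals[i] = vecs[:, 0]                         # smallest-variance direction
    return normals
```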
The sampling process to obtain dense samples on the iso-surface involves the following steps: projection, uniform sampling, and up-sampling. The projection operation takes a randomly sampled point q in the neighborhood of the set P and projects it back onto the surface represented by the zero-level set of the neural network f(q; Ψ) using Newton's iterations. Given a point q^{k−1}, the k-th update step is given by

q^k = q^{k−1} − π( f(q^{k−1}) J^T(q^{k−1}) / ∥J(q^{k−1})∥² ),   (5)

where J(q) is the Jacobian of the network output with respect to the spatial coordinates q.
π(x) = (x/∥x∥) min(∥x∥, τ₀),   (6)

where π(x) is the clipping operation that avoids possibly noisy update steps obtained from the non-smooth neural implicit surface representation, and τ₀ is the maximum preset threshold computed from the size of the bounding box around the region of interest. Newton's update steps are repeated until |f(q^k)| ≤ 10⁻⁶.
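A sketch of the projection in Eqs. (5)-(6) for an autograd-capable network f, written here in PyTorch; the threshold τ₀ = 0.1 and the iteration cap are assumptions:

```python
import torch

def project_to_zero_level_set(f, q, tau0=0.1, max_iter=10, tol=1e-6):
    """Newton iterations q <- q - pi(f(q) J^T(q) / ||J(q)||^2), Eqs. (5)-(6)."""
    for _ in range(max_iter):
        q = q.detach().requires_grad_(True)
        vals = f(q).reshape(-1, 1)                      # SDF value, one per point
        if vals.abs().max() <= tol:                     # stop once |f(q)| <= tol
            break
        (J,) = torch.autograd.grad(vals.sum(), q)       # rows are gradients of f
        step = vals * J / (J.pow(2).sum(-1, keepdim=True) + 1e-12)
        norm = step.norm(dim=-1, keepdim=True)          # Eq. (6): clip the step length
        step = step / (norm + 1e-12) * torch.clamp(norm, max=tau0)
        q = q - step
    return q.detach()
```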
This projection operator still produces non-uniform samples from the surface and can leave regions with no samples. A uniform sampling step therefore moves samples away from high-density regions, and the resulting set of points is projected onto the zero-level set of the SDF using the iterations in Eq. (5). Finally, a dense point cloud is created at a desired density using a simplified version of the edge-aware resampling (EAR) technique [22]: points are pushed away from edges to avoid discontinuities in the normal direction, while the formation of clusters is penalized. Up-sampling then adds new points in low-density regions. After the up-sampling step, the generated points are again projected back onto the SDF zero-level set using Newton's iterations in Eq. (5). The Jacobian of the SDF output represents the normal of the surface. We utilize this predicted normal, the viewing direction, and the spatial coordinate to predict the real and imaginary parts of the scattering coefficients at that spatial coordinate.
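Purely as an illustration of the uniform-sampling idea (the paper's simplified EAR procedure is not spelled out, so the Gaussian weighting, radius, and step size below are my assumptions), one repulsion step pushes each point away from the weighted centroid of its neighbors, after which the points would be re-projected with Eq. (5):

```python
import numpy as np
from scipy.spatial import cKDTree

def repel_step(points, radius=0.2, step=0.05):
    """One repulsion update: push each point away from the Gaussian-weighted
    centroid of its neighbors; the caller then re-projects onto the surface."""
    tree = cKDTree(points)
    out = points.copy()
    for i, p in enumerate(points):
        idx = [j for j in tree.query_ball_point(p, radius) if j != i]
        if not idx:
            continue
        diffs = p - points[idx]                          # vectors pointing away from neighbors
        w = np.exp(-np.sum(diffs ** 2, axis=1) / (radius / 2.0) ** 2)
        out[i] = p + step * (w[:, None] * diffs).sum(axis=0) / w.sum()
    return out
```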
Proposed solution: In this section, we present the complete network architecture that jointly predicts the signed distance function and the view-dependent scattering coefficients, and we discuss the loss function used to learn the network parameters.
Network architecture: We utilize the network architecture shown in Figure 1, proposed in [26] and [33], to estimate the signed distance function. The input Fourier feature layers map the spatial coordinates to N_F = 9 frequencies and produce an N_v = 64-dimensional Fourier feature map for the viewing angle. The SDF section of the network consists of 9 hidden layers with a latent feature size of 512 and soft-plus activation functions. The transformed input is also fed into layer 4. The output layer consists of a linear map and a tanh activation function to predict the SDF. The network section that predicts the scattering coefficients consists of 10 hidden layers with a latent feature size of 1024. Finally, the network weights and biases are initialized with standard Gaussian variables.

Fig. 1: The network architecture consists of an input Fourier feature layer, 8 linear layers with softplus activation functions, and an output layer with a tanh activation function to predict the signed distance function.
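A condensed PyTorch sketch of the two-section network as described; the layer counts and widths follow the text, while the skip-connection wiring, the scattering-section inputs and activations, and the Fourier-feature construction are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FourierFeatures(nn.Module):
    """gamma(x) = [cos(2*pi*Bx), sin(2*pi*Bx)] with fixed Gaussian frequencies B."""
    def __init__(self, in_dim, n_freqs, scale=2.0):
        super().__init__()
        self.register_buffer("B", torch.randn(n_freqs, in_dim) * scale)

    def forward(self, x):
        proj = 2.0 * torch.pi * x @ self.B.T
        return torch.cat([proj.cos(), proj.sin()], dim=-1)   # (N, 2*n_freqs)

class SARImplicitNet(nn.Module):
    """SDF section: 9 hidden layers of width 512, softplus, input re-fed at layer 4,
    tanh output. Scattering section: 10 hidden layers of width 1024 mapping
    (coordinate features, viewing-direction features, predicted normal) to Re/Im."""
    def __init__(self):
        super().__init__()
        self.gp = FourierFeatures(3, 9)    # N_F = 9 spatial frequencies -> 18 features
        self.gv = FourierFeatures(3, 32)   # 64-dimensional viewing-angle feature map
        dp = 18
        dims = [dp] + [512] * 8            # input widths of the 9 hidden layers
        self.sdf = nn.ModuleList(
            [nn.Linear(dims[i] + (dp if i == 4 else 0), 512) for i in range(9)])
        self.sdf_out = nn.Linear(512, 1)
        layers = []
        for i in range(10):                # scattering section
            layers += [nn.Linear(dp + 64 + 3 if i == 0 else 1024, 1024), nn.Softplus()]
        self.scat = nn.Sequential(*layers, nn.Linear(1024, 2))

    def forward(self, p, v):
        fp = self.gp(p)
        h = fp
        for i, layer in enumerate(self.sdf):
            h = F.softplus(layer(torch.cat([h, fp], dim=-1) if i == 4 else h))
        sdf = torch.tanh(self.sdf_out(h))
        # surface normal = gradient of the SDF with respect to p
        (n,) = torch.autograd.grad(sdf.sum(), p, create_graph=True)
        g = self.scat(torch.cat([fp, self.gv(v), n], dim=-1))
        return sdf, torch.complex(g[..., 0], g[..., 1])
```

Here p must be created with requires_grad=True so the surface normal can be obtained by automatic differentiation.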
Loss function: The point clouds detected from SAR measurements are sparse and noisy, since they contain aliased copies due to non-uniform elevation sampling. These point clouds are usually concentrated near edges and other dominant scattering mechanisms. We aim to reconstruct the 3D object by enforcing the surface prior and denoising the point cloud while estimating the network parameters that represent the object of interest. The iso-points estimated from the implicit representation serve as a consistent and smooth approximation of the object that is refined during the training process; this regularization ensures the training procedure does not overfit to noise. Since the iso-points are updated throughout training, we hypothesize that the network learns the high-frequency signals governed by the underlying geometry of the object. We define three point-cloud sets: Q_iso = {q : f(q; Ψ) = 0}, the set of iso-points; P, the training set; and Q_b, the points sampled from the region of interest using a uniform probability distribution. Since the iso-points are uniformly distributed on the surface, they can be used to augment the supervision in undersampled areas by enforcing the SDF to be 0 on the iso-points set. The zero condition on the SDF is implemented through a sparsity-promoting ℓ₁ regularization in Eqs. (7) and (8). The SDF for non-surface points is enforced to be non-zero using the exponential term in Eq. (9). The Eikonal regularizer in Eq. (10) enforces the network's output to mirror the properties of signed distance functions.
L_isoSDF = (1/|Q_iso|) Σ_{q ∈ Q_iso} |f(q)|   (7)

L_on = (1/|P|) Σ_{p ∈ P} |f(p)|   (8)

L_off = (1/|Q_b|) Σ_{q ∈ Q_b} exp(−α|f(q)|)   (9)

L_Eik = (1/|Q_b ∪ Q_iso|) Σ_{q ∈ Q_b ∪ Q_iso} |1 − ∥J^T(q)∥|   (10)
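A direct transcription of Eqs. (7)-(10) in PyTorch, assuming f is the SDF section of the network and returns one value per point; the gradient helper and the value of α are assumptions:

```python
import torch

def sdf_grad(f, x):
    """J(x): gradient of the scalar SDF f at points x, via autograd."""
    x = x.detach().requires_grad_(True)
    (J,) = torch.autograd.grad(f(x).sum(), x, create_graph=True)
    return J

def sdf_losses(f, q_iso, p_train, q_b, alpha=100.0):
    L_isoSDF = f(q_iso).abs().mean()                              # Eq. (7)
    L_on = f(p_train).abs().mean()                                # Eq. (8)
    L_off = torch.exp(-alpha * f(q_b).abs()).mean()               # Eq. (9)
    q_all = torch.cat([q_b, q_iso], dim=0)                        # Q_b union Q_iso
    L_Eik = (1.0 - sdf_grad(f, q_all).norm(dim=-1)).abs().mean()  # Eq. (10)
    return L_isoSDF, L_on, L_off, L_Eik
```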
We compute the iso-points' normals using local principal component analysis (PCA). The consistency between the Jacobian of the model and the normal estimated from a local neighborhood is evaluated using the cosine similarity, as shown below:
L_isoNormal = (1/|Q_iso|) Σ_{q ∈ Q_iso} (1 − |SC(J^T(q), n_PCA(q))|)   (11)

L_Normal = (1/|P|) Σ_{p ∈ P} (1 − |SC(J^T(p), n_PCA(p))|),   (12)

where the cosine similarity is SC(a, b) = a·b / (∥a∥∥b∥). The amplitude prediction loss is given by

L_amp = (1/|P|) Σ_{p ∈ P} ∥g(p, v) − g_p(p, v)∥₂²   (13)

The optimization objective comprises seven terms:

L = λ_isoSDF L_isoSDF + λ_isoNormal L_isoNormal + λ_eik L_Eik + λ_onSDF L_on + λ_normal L_Normal + λ_offSDF L_off + λ_amp L_amp   (14)

III. EXPERIMENTS
We evaluate the proposed algorithm on the Civilian Vehicle Data Domes dataset presented in [35]. The parameters used for the experiments are listed in Table I. We utilize 8 elevation passes to estimate the point cloud. The point cloud and the estimated normal vectors are used to train the network to estimate the SDF and the scattering coefficients. We present the reconstruction results in Section IV.

TABLE I: Parameter setup.
Parameter                        Value (Civilian vehicles)
Bandwidth                        640 MHz
Azimuth angle in sub-aperture    3.1 degrees
Azimuth angle range              0 to 360 degrees
∆θ                               0.0625 degrees
Elevation angles (degrees)       43.0625, 43.2500, 43.4375, 43.8125, 44.0625, 44.3750, 44.6250, 44.7500
λ_isoSDF                         200
λ_isoNormal                      5
λ_onSDF                          200
λ_normal                         20
λ_offSDF                         400
λ_eikonal                        5

IV. RESULTS
We evaluate the ability of the proposed architecture shown in Figure 1 to reconstruct the complex-valued scattering coefficients while also modeling the underlying 3D structure of the object. The training dataset is obtained from the point cloud recovered by the regularized inversion methods. We recover the scattering coefficients by minimizing the loss function defined in Eq. (14). As seen in Figure 2, the network architecture can jointly model the scattering coefficients and the 3D structure of the object using the SDF representation.

Fig. 2: Reconstruction ability of the implicit network in approximating the real part (a) and imaginary part (b) of the scattering coefficients as a function of the viewing angle. The underlying 3D model representing the Camry vehicle, generated as a mesh by the SDF section, is shown in (c).

V. CONCLUSION
In this work, we presented a method that denoises the sparse point cloud generated by regularized inversion methods from multi-pass SAR phase-history measurements, learns an implicit representation of the object's surface using the signed distance function, and learns the view-dependent complex-valued scattering coefficients for the detected scattering centers. In the experiments, we verified the ability of the implicit network to jointly model the 3D structure of the object and the angle-dependent scattering coefficients. Currently, we use 3D supervision to model the vehicles; in the future, we aim to extend the current methodology to approximate the scattering behavior of vehicles using 2D supervision.

REFERENCES
[1] L. C. Potter, E. Ertin, J. T. Parker, and M. Cetin, “Sparsity and
compressed sensing in radar imaging,” Proceedings of the IEEE 98,
pp. 1006–1020, June 2010.
[2] J. Ash, E. Ertin, L. C. Potter, and E. Zelnio, “Wide-angle synthetic aperture radar imaging: Models and algorithms for anisotropic scattering,”
IEEE Signal Processing Magazine 31(4), pp. 16–26, 2014.
[3] M. Ferrara, J. A. Jackson, and C. Austin, “Enhancement of multi-pass
3D circular SAR images using sparse reconstruction techniques,” in
Algorithms for Synthetic Aperture Radar Imagery XVI, 7337, p. 733702,
International Society for Optics and Photonics, 2009.
[4] C. D. Austin, E. Ertin, and R. L. Moses, “Sparse signal methods
for 3-D radar imaging,” IEEE Journal of Selected Topics in Signal
Processing 5(3), pp. 408–423, 2010.
[5] E. Ertin, C. D. Austin, S. Sharma, R. L. Moses, and L. C. Potter, "GOTCHA experience report: Three-dimensional SAR imaging with complete circular apertures," in Algorithms for Synthetic Aperture Radar Imagery XIV, 6568, p. 656802, International Society for Optics and Photonics, 2007.
[6] C. D. Austin, E. Ertin, and R. L. Moses, "Sparse multipass 3D SAR imaging: Applications to the GOTCHA data set," in Algorithms for Synthetic Aperture Radar Imagery XVI, 7337, pp. 19–30, SPIE, 2009.
[7] N. Sugavanam and E. Ertin, “Recovery guarantees for MIMO radar
using multi-frequency LFM waveform,” in 2016 IEEE Radar Conference
(RadarConf), pp. 1–6, IEEE, 2016.
[8] N. Sugavanam and E. Ertin, “Interrupted SAR imaging with limited
persistence scattering models,” in 2017 IEEE Radar Conference (RadarConf), pp. 1770–1775, IEEE, 2017.
[9] N. Sugavanam and E. Ertin, “Limited persistence models for SAR
automatic target recognition,” in Algorithms for Synthetic Aperture
Radar Imagery XXIV, 10201, p. 102010M, International Society for
Optics and Photonics, 2017.
[10] N. Sugavanam, E. Ertin, and R. Burkholder, “Approximating bistatic
SAR target signatures with sparse limited persistence scattering models,”
in Int. Conf. on Radar, Brisbane, 2018.
[11] N. Sugavanam, E. Ertin, and R. Burkholder, “Compressing bistatic SAR
target signatures with sparse-limited persistence scattering models,” IET
Radar, Sonar & Navigation 13(9), pp. 1411–1420, 2019.
[12] N. Sugavanam and E. Ertin, "Models of anisotropic scattering for 3D SAR reconstruction," in 2022 IEEE Radar Conference (RadarConf22), pp. 1–6, IEEE, 2022.
[13] N. Sugavanam, S. Baskar, and E. Ertin, "High resolution MIMO radar sensing with compressive illuminations," IEEE Transactions on Signal Processing 70, pp. 1448–1463, 2022.
[14] T. Agarwal, N. Sugavanam, and E. Ertin, "Sparse signal models for data augmentation in deep learning ATR," in 2020 IEEE Radar Conference (RadarConf20), pp. 1–6, IEEE, 2020.
[15] T. Scarnati and J. R. Jamora, “Three-dimensional object reconstruction
from sparse multi-pass SAR data,” in Algorithms for Synthetic Aperture
Radar Imagery XXVIII, 11728, p. 117280I, International Society for
Optics and Photonics, 2021.
[16] J. A. Jackson and R. L. Moses, "Synthetic aperture radar 3D feature extraction for arbitrary flight paths," IEEE Transactions on Aerospace and Electronic Systems 48(3), pp. 2065–2084, 2012.
[17] K. R. Varshney, M. Cetin, J. W. Fisher III, and A. S. Willsky, “Joint
image formation and anisotropy characterization in wide-angle SAR,”
Proc. SPIE Algorithms for Synthetic Aperture Radar Imagery XIII 6237,
pp. 62370D–62370D–12, 2006.
[18] M. Wang, S. Wei, J. Liang, Z. Zhou, Q. Qu, J. Shi, and X. Zhang, "TPSSI-Net: Fast and enhanced two-path iterative network for 3D SAR sparse imaging," IEEE Transactions on Image Processing 30, pp. 7317–7332, 2021.
[19] J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, "Real-time 3D reconstruction from single-photon LIDAR data using plug-and-play point cloud denoisers," Nature Communications 10(1), pp. 1–6, 2019.
[20] G. Guennebaud and M. Gross, "Algebraic point set surfaces," in ACM SIGGRAPH 2007 Papers, pp. 23–es, 2007.
[21] G. Guennebaud, M. Germann, and M. Gross, “Dynamic sampling and
rendering of algebraic point set surfaces,” in Computer Graphics Forum,
27(2), pp. 653–662, Wiley Online Library, 2008.
[22] H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and H. Zhang, "Edge-aware point set resampling," ACM Transactions on Graphics (TOG) 32(1), pp. 1–12, 2013.
[23] M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan,
U. Singhal, R. Ramamoorthi, J. Barron, and R. Ng, “Fourier features let
networks learn high frequency functions in low dimensional domains,”
Advances in Neural Information Processing Systems 33, pp. 7537–7547,
2020.
[24] Z. Wang, S. Wu, W. Xie, M. Chen, and V. A. Prisacariu, "NeRF−−: Neural radiance fields without known camera parameters," arXiv preprint arXiv:2102.07064, 2021.
[25] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger, "Occupancy networks: Learning 3D reconstruction in function space," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4460–4470, 2019.
[26] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove,
“DeepSDF: Learning continuous signed distance functions for shape
representation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 165–174, 2019.
[27] W. Huang, H. B. Li, J. Pan, G. Cruz, D. Rueckert, and K. Hammernik, "Neural implicit k-space for binning-free non-Cartesian cardiac MR imaging," in International Conference on Information Processing in Medical Imaging, pp. 548–560, Springer, 2023.
[28] J. Feng, R. Feng, Q. Wu, Z. Zhang, Y. Zhang, and H. Wei, "Spatio-temporal implicit neural representation for unsupervised dynamic MRI reconstruction," arXiv preprint arXiv:2301.00127, 2022.
[29] L. Yariv, Y. Kasten, D. Moran, M. Galun, M. Atzmon, B. Ronen, and
Y. Lipman, “Multiview neural surface reconstruction by disentangling
geometry and appearance,” Advances in Neural Information Processing
Systems 33, pp. 2492–2502, 2020.
[30] A. Gropp, L. Yariv, N. Haim, M. Atzmon, and Y. Lipman, “Implicit geometric regularization for learning shapes,” in International Conference
on Machine Learning, pp. 3789–3799, PMLR, 2020.
[31] M. G. Crandall and P.-L. Lions, "Viscosity solutions of Hamilton-Jacobi equations," Transactions of the American Mathematical Society 277(1), pp. 1–42, 1983.
[32] M. Atzmon, N. Haim, L. Yariv, O. Israelov, H. Maron, and Y. Lipman, “Controlling neural level sets,” Advances in Neural Information
Processing Systems 32, 2019.
[33] W. Yifan, S. Wu, C. Oztireli, and O. Sorkine-Hornung, “Iso-points:
Optimizing neural implicit surfaces with hybrid representations,” in
Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, pp. 374–383, 2021.
[34] L. Greengard and J.-Y. Lee, "Accelerating the nonuniform fast Fourier transform," SIAM Review 46(3), pp. 443–454, 2004.
[35] K. E. Dungan, C. Austin, J. Nehrbass, and L. C. Potter, "Civilian vehicle radar data domes," in Algorithms for Synthetic Aperture Radar Imagery XVII, 7699, p. 76990P, International Society for Optics and Photonics, 2010.