Subject Area
Mathematics, Applied
Abstract
Solving high-dimensional partial differential equations (PDEs) is a fundamental challenge in scientific computing, with applications ranging from quantum chemistry and computational finance to statistical physics and stochastic optimal control. Classical numerical methods such as finite element or finite difference schemes suffer from the curse of dimensionality, rendering them computationally infeasible when the dimension $d$ exceeds a handful. Physics-informed neural network (PINN) methods alleviate this by embedding the PDE residual directly into a loss function, but they require computing derivatives of the network with respect to its spatial inputs---an operation that scales poorly in high dimensions and demands that the approximate solution be sufficiently smooth. This dissertation develops two complementary \emph{stochastic, derivative-free} deep learning frameworks for high-dimensional PDEs, in which the network is never differentiated with respect to the spatial variable. Both methods replace the pointwise strong-form residual with a statistically natural, expectation-based training objective that can be estimated by Monte Carlo sampling. The first framework is \emph{DeepMartNet}, a martingale-based deep neural network method for Dirichlet boundary value problems and eigenvalue problems of elliptic PDEs in $\mathbb{R}^d$. Grounded in Varadhan's martingale problem, DeepMartNet trains a network by enforcing a conditional-expectation constraint along It\^{o} diffusion paths---completely bypassing spatial differentiation of the network. A single set of SDE trajectories from one starting point yields a global approximation of the solution over the entire domain, a substantial advance over point-by-point Feynman--Kac methods. Numerical experiments validate the approach on the Poisson--Boltzmann equation and on eigenvalue problems of the Laplace and Fokker--Planck operators in dimensions up to $d = 100$.
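To make the martingale constraint concrete, the following is a minimal numpy sketch (an illustration of the idea, not the dissertation's implementation): for standard Brownian motion $X_t$ in $\mathbb{R}^d$, whose generator is $\tfrac{1}{2}\Delta$, a solution $u$ of $\tfrac{1}{2}\Delta u = f$ makes $u(X_t) - \int_0^t f(X_s)\,ds$ a martingale, so the empirical mean of the increments $u(X_{t+\Delta t}) - u(X_t) - f(X_t)\Delta t$ vanishes along sampled paths. The loss below penalizes the square of that mean, evaluating $u$ but never differentiating it; the candidate functions and problem sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_paths, n_steps, dt = 10, 20000, 50, 0.01

# Ito paths: standard Brownian motion in R^d (generator (1/2)*Laplacian),
# all launched from a single starting point, as in the DeepMartNet setup.
dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, d))
X = np.concatenate([np.zeros((n_paths, 1, d)), np.cumsum(dW, axis=1)], axis=1)

def martingale_loss(u, f):
    """Squared empirical mean of the martingale increments
    u(X_{t+dt}) - u(X_t) - f(X_t)*dt, summed over time steps.
    The candidate u is only evaluated, never differentiated."""
    U = u(X)                 # shape (n_paths, n_steps + 1)
    F = f(X[:, :-1])         # shape (n_paths, n_steps)
    inc = U[:, 1:] - U[:, :-1] - F * dt
    return float(np.sum(inc.mean(axis=0) ** 2))

# u(x) = |x|^2 solves (1/2)*Laplacian(u) = d, so its loss is ~0, while the
# miscalibrated candidate 2|x|^2 accumulates a visible drift of d*dt per step.
u_true = lambda X: (X ** 2).sum(axis=-1)
u_bad  = lambda X: 2.0 * (X ** 2).sum(axis=-1)
f_rhs  = lambda X: np.full(X.shape[:-1], float(d))

print(martingale_loss(u_true, f_rhs))  # near zero (Monte Carlo noise only)
print(martingale_loss(u_bad,  f_rhs))  # ~ (d*dt)^2 * n_steps, clearly positive
```

In an actual training loop, `u` would be a neural network and the loss would be minimized over its parameters; the point of the sketch is that the objective is a plain Monte Carlo average along paths.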
The second framework is a \emph{Weak Adversarial Neural Pushforward Mapping} (WANPM) Sampler for steady and transient distributions governed by Fokker--Planck equations. Rather than representing the probability density pointwise---which would require differentiating the network---we parametrize the solution distribution implicitly through a neural pushforward map that transforms a base distribution into the target. The training objective derives from the distributional (weak) form of the Fokker--Planck equation, in which all differential operators act on analytically tractable plane-wave test functions, not on the network. The resulting minimax problem is entirely mesh-free, automatically conserves probability, and extends naturally to singular initial data, Riemannian manifolds, and L\'{e}vy--Fokker--Planck equations with fractional diffusion.
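The weak-form residual can likewise be estimated from samples alone. The sketch below is an illustrative numpy stand-in (with plain Gaussian samples in place of a trained neural pushforward map): for the 1-D Ornstein--Uhlenbeck process $dX = -X\,dt + \sqrt{2}\,dW$, whose stationary law is $\mathcal{N}(0,1)$, the stationary Fokker--Planck equation in weak form requires $\mathbb{E}_p[(\mathcal{L}\varphi_k)(X)] = 0$ for test functions $\varphi_k$. With plane waves $\varphi_k(x) = e^{ikx}$ the generator acts analytically, $(\mathcal{L}\varphi_k)(x) = (-ikx - k^2)e^{ikx}$, so all derivatives fall on the test functions and the residual is a plain Monte Carlo average over samples; the specific frequencies and sample counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
ks = np.array([0.5, 1.0, 1.5])   # plane-wave test-function frequencies
n = 200_000

def weak_fp_residual(samples):
    """Mean squared weak-form residual |E_p[(L phi_k)(X)]|^2 over frequencies k,
    for the OU generator L = -x d/dx + d^2/dx^2 applied to phi_k(x) = exp(ikx).
    All differentiation is done analytically on phi_k, never on the sampler."""
    x = samples[:, None]                                   # (n, 1)
    Lphi = (-1j * ks * x - ks ** 2) * np.exp(1j * ks * x)  # (n, len(ks))
    r = Lphi.mean(axis=0)        # Monte Carlo estimate of E_p[L phi_k]
    return float(np.mean(np.abs(r) ** 2))

# Samples from the true stationary law N(0,1) drive the residual to zero;
# a mismatched law N(0, 2^2) does not. In the WANPM setting these samples
# would instead come from a neural pushforward map applied to a base law.
good = rng.normal(size=n)
bad  = 2.0 * rng.normal(size=n)

print(weak_fp_residual(good))   # near zero (Monte Carlo noise only)
print(weak_fp_residual(bad))    # clearly positive
```

In the adversarial formulation, the test-function side is maximized rather than fixed; the sketch fixes a few frequencies only to show that the objective depends on the sampler solely through its samples.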
Degree Date
Spring 5-16-2026
Document Type
Dissertation
Degree Name
Ph.D.
Department
Department of Mathematics
Advisor
Wei Cai
Number of Pages
180
Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Recommended Citation
He, Qing, "Stochastic Derivative-Free Deep Learning Methods for Solving High Dimensional Partial Differential Equations" (2026). Mathematics Theses and Dissertations. 31.
https://scholar.smu.edu/hum_sci_mathematics_etds/31
