Partial differential equations (PDEs) play a prominent role in modeling complex systems in science and engineering and in understanding how those systems evolve over time and space. Most PDEs of practical importance are analytically intractable, so we must fall back on numerical approximation schemes. Recently, there has been a push to build neural-numerical hybrid solvers, continuing the modern trend toward fully end-to-end learned systems. In this talk, we propose a novel Bayesian variational autoencoder (VAE) framework for solving forward and inverse problems for PDEs. A major contribution of our work is the introduction of adaptive Fourier decomposition (AFD) for the probabilistic decoder. This AFD-enhanced BVAE framework consists of a probabilistic encoder, latent-to-kernel neural networks, and convolutional neural networks. The probabilistic encoder maps the input data to a latent space. To preserve useful mathematical properties and physical insights, we restrict the latent space to its reproducing kernel Hilbert space (RKHS) via latent-to-kernel neural networks. The AFD-based convolutional neural networks are then applied to the resulting RKHS as the decoder. These neural networks are trained end to end. Compared to a conventional BVAE and existing neural PDE solvers, the AFD-enhanced BVAE framework not only possesses desirable theoretical properties but also achieves superior computational efficiency. We demonstrate this by validating our framework on Burgers' equation and the Richards equation, as well as their inverse problems.
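
The encoder-to-RKHS-to-decoder pipeline described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the one-layer encoder, the Gaussian RBF kernels standing in for the latent-to-kernel networks, and the plain RKHS expansion standing in for the AFD-based convolutional decoder are all simplified placeholders, not the architecture from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n grid points of the PDE solution field,
# d latent dimensions, m kernel centres for the RKHS representation.
n, d, m = 64, 8, 16

# Hypothetical weights for a one-layer probabilistic encoder.
W_mu = rng.normal(0.0, 0.1, (n, d))
W_logvar = rng.normal(0.0, 0.1, (n, d))

def encode(x):
    """Probabilistic encoder: map an input field x (length n) to a
    latent mean and log-variance (each length d)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Latent-to-kernel step (simplified stand-in): the latent code sets the
# coefficients of Gaussian RBF kernels, whose span is an RKHS.
grid = np.linspace(0.0, 1.0, n)
centres = np.linspace(0.0, 1.0, m)
W_coef = rng.normal(0.0, 0.1, (d, m))

def latent_to_rkhs(z, lengthscale=0.1):
    """Map latent code z to an RKHS element: a weighted sum of kernel
    functions evaluated on the spatial grid."""
    coef = z @ W_coef  # kernel coefficients from the latent code
    K = np.exp(-(grid[:, None] - centres[None, :]) ** 2
               / (2.0 * lengthscale ** 2))
    return K @ coef    # reconstructed field on the grid, shape (n,)

# End-to-end forward pass on a toy input field. The decoder step here is
# just the RKHS expansion; the AFD-based convolutional decoder from the
# talk would operate on this RKHS representation instead.
x = np.sin(2.0 * np.pi * grid)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
u_hat = latent_to_rkhs(z)
print(u_hat.shape)  # (64,)
```

The point of the sketch is the data flow: the input field is compressed to a finite-dimensional latent code, the code is lifted into a function space with known reproducing-kernel structure, and the decoder then works on that function rather than on raw latent coordinates.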