Mathematical Theory and Applications ›› 2023, Vol. 43 ›› Issue (2): 16-31. doi: 10.3969/j.issn.1006-8074.2023.02.002

Neural Solution Operator of PDEs Based on Function-valued Reproducing Kernel Hilbert Space

Bao Kaijun, Liu Ziyuan, Wang Haifeng, Qian Xu*, Song Songhe   

  1. Department of Mathematics, National University of Defense Technology, Changsha 410073, China
  • Online: 2023-06-28  Published: 2023-06-27
  • Contact: Qian Xu (1985–), Associate Professor, PhD; E-mail: qianxu@nudt.edu.cn
  • Supported by:
    This work is supported by the National Key R&D Program of China (No. 2020YFA0709800), the National Natural Science Foundation
    of China (Nos. 11901577, 11971481, 12071481), the Natural Science Foundation of Hunan (Nos. 2021JJ20053, 2020JJ5652), the
    Science and Technology Innovation Program of Hunan Province (No. 2021RC3082), and the Defense Science Foundation of China
    (No. 2021-JCJQ-JJ-0538).

Abstract:

By learning mappings between infinite-dimensional function spaces with carefully designed neural networks, the operator learning methodology known as the neural operator has proven significantly more efficient than traditional methods at solving complex problems such as partial differential equations. To this end, we incorporate function-valued reproducing kernel Hilbert spaces (function-valued RKHS) and propose a novel neural operator, the reproducing kernel neural operator (RKNO). Motivated by the recently successful operator learning methodology, the deep operator network (DeepONet), the RKNO is formulated by generalizing the Hilbert-Schmidt integral operator and the representer theorem. Numerical experiments on the Advection, KdV, Burgers', and Poisson equations show that the RKNO admits a more expressive and efficient architecture than DeepONet and other models. Furthermore, the RKNO is discretization-independent: after training on low-resolution data, it can find the solution for a high-resolution input.
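For orientation, here is a minimal sketch of the two classical ingredients named in the abstract; these are their standard textbook statements, not formulas taken from the paper itself. A Hilbert-Schmidt integral operator maps an input function $a$ to an output function through a square-integrable kernel $k$,

$$(T a)(x) = \int_D k(x, y)\, a(y)\, \mathrm{d}y, \qquad k \in L^2(D \times D),$$

and the representer theorem states that the minimizer of a regularized empirical risk over an RKHS $\mathcal{H}_k$ is a finite kernel expansion over the training points:

$$f^\star = \operatorname*{arg\,min}_{f \in \mathcal{H}_k} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big) + \lambda \|f\|_{\mathcal{H}_k}^2 \quad \Longrightarrow \quad f^\star(\cdot) = \sum_{i=1}^{n} \alpha_i\, k(\cdot, x_i).$$

In a function-valued RKHS the scalar kernel is replaced by an operator-valued one acting between function spaces; the abstract indicates that the RKNO is obtained by generalizing the two statements above to that setting.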

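The discretization-independence property can be illustrated with a toy quadrature version of a kernel integral layer. The sketch below is hypothetical (NumPy, with a fixed Gaussian kernel standing in for a learned one) and is not the authors' RKNO implementation; it only shows why a layer defined through an integral can accept input sampled on a coarse grid and produce output on a finer one.

    import numpy as np

    def gaussian_kernel(x, y, lengthscale=0.1):
        # k(x, y) = exp(-(x - y)^2 / (2 l^2)); a stand-in for a learned kernel
        return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * lengthscale ** 2))

    def kernel_integral_layer(a_vals, grid_in, grid_out):
        # (T a)(x) ≈ sum_j k(x, y_j) a(y_j) Δy  (quadrature on the input grid)
        dy = grid_in[1] - grid_in[0]              # uniform-grid quadrature weight
        return gaussian_kernel(grid_out, grid_in) @ a_vals * dy

    coarse = np.linspace(0.0, 1.0, 32)            # "training" resolution
    fine = np.linspace(0.0, 1.0, 256)             # evaluation resolution
    a_coarse = np.sin(2 * np.pi * coarse)         # input function sampled coarsely

    out = kernel_integral_layer(a_coarse, coarse, fine)
    print(out.shape)                              # (256,): output at any resolution

The same kernel (fixed here, learned in a neural operator) defines the layer at every resolution, which is what allows a model trained on 32-point data to be queried on a 256-point grid.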