Accelerating Physics-Informed Neural Network Training Time for Fluid Dynamic Applications
Jahani Nasab, Mahyar | 2024
- Type of Document: M.Sc. Thesis
- Language: Farsi
- Document No: 57770 (08)
- University: Sharif University of Technology
- Department: Mechanical Engineering
- Advisor(s): Bijarchi, Mohammad Ali
- Abstract:
- This research introduces an accelerated training approach for Vanilla Physics-Informed Neural Networks (PINNs) that addresses three factors affecting the loss function: the initial weight state of the neural network, the ratio of domain to boundary points, and the loss weighting factor. The proposed method has two phases. In the first phase, a dedicated loss function is built from a subset of the boundary conditions and partial differential equation terms. We also introduce preprocessing procedures that reduce the variance at initialization and select domain points according to the initial weight states of the candidate networks. The second phase resembles Vanilla-PINN training, except that a portion of the random weights is replaced with weights from the first phase. The network's structure is thus biased toward satisfying the boundary conditions, which in turn improves overall convergence. The method is evaluated on three benchmarks: two-dimensional flow over a cylinder, an inverse problem of inlet velocity determination, and the Burgers equation. Incorporating the weights produced in the first training phase neutralizes the effects of loss imbalance. Notably, the proposed approach outperforms Vanilla-PINN in speed and convergence likelihood, and eliminates the need for hyperparameter tuning to balance the loss function. Although this training methodology improves the convergence probability of such algorithms, they still struggle with transient problems. Solving unsteady partial differential equations (PDEs) with recurrent neural networks (RNNs) typically requires numerical derivatives between consecutive RNN blocks to form the physics-informed loss function, which introduces the complications of numerical differentiation into training.
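The two-phase idea described in the abstract can be illustrated with a minimal PyTorch sketch. The toy problem (u'' = 0 with u(0) = 0, u(1) = 1), the network sizes, and the choice to transfer only the first layer are illustrative assumptions, not the thesis's actual configuration: phase one fits a network to the boundary conditions alone, and phase two substitutes some of those weights into a fresh network before training on the full physics-informed loss.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    # Small tanh MLP; architecture is an illustrative assumption.
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(),
                         nn.Linear(32, 1))

# Toy boundary data for u'' = 0 with u(0) = 0, u(1) = 1 (exact solution u = x).
xb = torch.tensor([[0.0], [1.0]])
ub = torch.tensor([[0.0], [1.0]])

# Phase 1: train on a loss built from the boundary conditions only.
net1 = make_net()
opt = torch.optim.Adam(net1.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((net1(xb) - ub) ** 2).mean()
    loss.backward()
    opt.step()

# Phase 2: fresh network, but a portion of its random weights
# (here, the first layer) is substituted with phase-1 weights.
net2 = make_net()
with torch.no_grad():
    net2[0].weight.copy_(net1[0].weight)
    net2[0].bias.copy_(net1[0].bias)

# Standard PINN training on PDE residual + boundary loss.
x = torch.rand(64, 1, requires_grad=True)
opt = torch.optim.Adam(net2.parameters(), lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    u = net2(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    loss = (d2u ** 2).mean() + ((net2(xb) - ub) ** 2).mean()
    loss.backward()
    opt.step()
```

Because phase two starts from weights that already respect the boundary conditions, neither loss term dominates early training, which is the imbalance the thesis targets.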
- In this study, we therefore modify the structure of the traditional RNN so that each block predicts over a time interval, making it possible to compute the time derivative of the output via the backpropagation algorithm. To achieve this, the time intervals of the blocks overlap, defining a mutual loss function between them. In addition, conditional hidden states yield a unique solution for each block, and a forget factor controls the influence of the conditional hidden state on the prediction of the subsequent block. This new model, termed the Mutual Interval RNN (MI-RNN), is applied to three benchmarks: the Burgers equation, unsteady heat conduction in an irregular domain, and the Taylor-Green vortex problem. Our results demonstrate that MI-RNN finds the exact solution more accurately than existing RNN models; for instance, in the second problem it achieves one order of magnitude lower relative error than an RNN with numerical derivatives.
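The core mechanism, overlapping time intervals tied together by a mutual loss, can be sketched as follows. This is a hypothetical minimal version on a toy ODE (du/dt = -u, u(0) = 1), with each "block" reduced to a small network over its own interval; the conditional hidden state and forget factor are omitted for brevity and the exact MI-RNN architecture differs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two "blocks", each responsible for one time interval (toy stand-in for RNN blocks).
block1 = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
block2 = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

t1 = torch.linspace(0.0, 0.6, 30).reshape(-1, 1).requires_grad_(True)  # block 1 interval
t2 = torch.linspace(0.4, 1.0, 30).reshape(-1, 1).requires_grad_(True)  # block 2 interval
t_ov = torch.linspace(0.4, 0.6, 10).reshape(-1, 1)                     # overlap region

def residual(net, t):
    # Time derivative comes from backpropagation, not numerical differencing.
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    return du + u  # residual of du/dt = -u

params = list(block1.parameters()) + list(block2.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    pde = (residual(block1, t1) ** 2).mean() + (residual(block2, t2) ** 2).mean()
    ic = (block1(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    # Mutual loss: the two blocks must agree on the overlapping interval.
    mutual = ((block1(t_ov) - block2(t_ov)) ** 2).mean()
    loss = pde + ic + mutual
    loss.backward()
    opt.step()
```

The overlap plays the role the abstract describes: it replaces numerical derivatives between blocks with an agreement constraint, so each block's time derivative is obtained exactly by autograd within its own interval.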
- Keywords:
- Artificial Intelligence ; Fluid Mechanics ; Physics Informed Neural Network ; Neural Networks