Integration Of Scientific Knowledge And Machine Learning
3 main points
✔️ A review of integrated models that combine scientific and machine learning models, compensating for each other's shortcomings and producing synergies
✔️ A number of integrated models have been shown to reduce computational load and improve accuracy compared with physical simulators
✔️ The field has been developing rapidly in recent years, but still leaves room for growth
Integrating Scientific Knowledge with Machine Learning for Engineering and Environmental Systems
written by Jared Willard, Xiaowei Jia, Shaoming Xu, Michael Steinbach, Vipin Kumar
(Submitted on 10 Mar 2020 (v1), last revised 23 Jul 2021 (this version, v5))
Comments: Accepted by ACM Computing Surveys.
Subjects: Computational Physics (physics.comp-ph); Machine Learning (cs.LG); Machine Learning (stat.ML)
The images used in this article are from the paper or created based on it.
Introduction
Attempts to apply machine learning to scientific modeling have been less successful than in other fields such as image recognition, natural language processing, and speech. This is because purely data-driven models require huge amounts of data, struggle to produce physically consistent results, and cannot generalize beyond the scenarios seen in the training sample. Research has therefore begun to explore the continuum between scientific knowledge and ML models and to integrate the two synergistically. Unlike the traditional way of applying domain knowledge to feature engineering and preprocessing, this line of work integrates scientific knowledge directly into the ML framework. Workshops and symposia dealing with this area have already started (see references [1-6]). This review paper first introduces a classification by objective, followed by a description of the different integration methods.
Objectives of physics/machine learning integration from an application perspective
Fig. 1 is an abstract representation of a generic scientific problem: taking the time-varying variables xt and the constants s as input, the mechanistic model F yields the output yt.
We will go through each objective in Table 1.
Replacing or improving state-of-the-art physical models
Although scientific models based on physical laws are widely used, not everything about the actual process is known, and every model is an approximation. In addition, these models contain many parameters whose exact values cannot be observed, so estimates are often substituted. ML models, on the other hand, can outperform physics-based models in many domains, because NNs can extract complex problem structures and patterns that cannot be expressed explicitly.
Downscaling methods are used when physical variables need to be modeled at a finer resolution but the computational load makes this difficult. There are two categories: statistical downscaling and dynamical downscaling. The former is an empirical model that predicts fine-resolution variables from coarse-resolution variables; it has traditionally been difficult because it requires capturing complex nonlinearities, but NNs are showing promise here. The latter dynamically simulates the relevant physical processes in a region where high-resolution, domain-specific simulation is required. It is still computationally expensive, but ML is expected to mitigate this. The latest ML methods can be applied to both, but the open issues are whether the learned ML component is consistent with established physical laws and whether overall simulation performance actually improves.
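The statistical variant above can be sketched in a few lines. This is a minimal, hypothetical example: it fits a linear regression (a stand-in for the NNs used in the surveyed work) that maps coarse-resolution block averages back to the fine-resolution field, on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fine" fields on 16 grid points, and their 4-point block averages
# standing in for a coarse-resolution model output.
n_samples, fine, factor = 200, 16, 4
x_fine = rng.standard_normal((n_samples, fine)).cumsum(axis=1)  # smooth-ish fields
x_coarse = x_fine.reshape(n_samples, fine // factor, factor).mean(axis=2)

# Statistical downscaling: fit an empirical map (here linear least squares,
# in practice an NN) from coarse predictors to each fine-resolution cell.
A = np.hstack([x_coarse, np.ones((n_samples, 1))])  # add a bias column
W, *_ = np.linalg.lstsq(A, x_fine, rcond=None)

pred = A @ W
rmse = np.sqrt(((pred - x_fine) ** 2).mean())
print(f"downscaling RMSE: {rmse:.3f}")
```

A real application would replace the linear map with a deep network and add physical-consistency checks, which is exactly the open issue the survey raises.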
To fit physical phenomena that cannot be captured even by complex physical models, parameterization is often used: complex dynamic processes are replaced by simplified physical approximations represented by static parameters. A common approach is to find the optimal parameter values by grid search. Another is to replace the parameterized process with a dynamic or static ML model, which has already been done successfully in several areas. The main advantage is the reduction in computation time compared to traditional simulations.
Current work mostly uses standard black-box ML for parameterization, but there is growing interest in integrating physical knowledge into these ML models, since that is expected to provide robustness, better generalization, and a reduction in the required training data.
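The contrast between a grid-searched static parameter and a learned, state-dependent one can be illustrated with a toy example. Everything here is hypothetical: the "true" subgrid flux is invented, and polynomial regression stands in for the NN parameterizations discussed in the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical subgrid flux: it depends on the resolved state x in a way the
# simplified physics approximates with a single static coefficient c.
x = rng.uniform(0.0, 2.0, 500)
flux_true = (0.5 + 0.3 * np.tanh(2 * x)) * x          # invented "true" process

# (1) Classic approach: grid search for the best static parameter c in c*x.
grid = np.linspace(0.1, 1.5, 141)
errs = [np.mean((c * x - flux_true) ** 2) for c in grid]
c_static = grid[int(np.argmin(errs))]
mse_static = np.mean((c_static * x - flux_true) ** 2)

# (2) ML approach: learn a state-dependent relationship (cubic regression
# here, an NN in the surveyed work) in place of the static parameter.
P = np.vander(x, 4)                                    # cubic features
w, *_ = np.linalg.lstsq(P, flux_true, rcond=None)
mse_ml = np.mean((P @ w - flux_true) ** 2)
print(f"static MSE: {mse_static:.4f}  learned MSE: {mse_ml:.4f}")
```

The learned map fits the state dependence that a single static coefficient cannot, which is the motivation for ML parameterizations; adding physical constraints to such a model is the integration step the survey highlights.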
Reduced-order models (ROMs) are computationally inexpensive representations of complex models. ML is beginning to help construct ROMs with greater accuracy and lower computational cost in several ways: one is an ML-based proxy (surrogate) model of the full model; others are ML surrogates for already existing ROMs, or ML models that learn the dimensionality-reduction mapping from the full-dimensional model to the reduced one. Applying ML here has the potential to significantly extend ROM performance.
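As a minimal sketch of the surrogate idea, the following hypothetical example compresses outputs of a stand-in "expensive" simulator with an SVD-based reduced basis (proper orthogonal decomposition) and fits a cheap regression from the physical parameter to the reduced coefficients; the simulator, its closed form, and the parameter range are all invented for illustration.

```python
import numpy as np

def expensive_simulator(k):
    """Stand-in for a costly full-order model: a 1-D field parameterized
    by a conductivity-like parameter k (closed form used for brevity)."""
    x = np.linspace(0.0, 1.0, 200)
    return np.sin(np.pi * x) / k

# Offline: run the full model at a few parameter values...
k_train = np.linspace(0.5, 2.0, 20)
Y = np.array([expensive_simulator(k) for k in k_train])

# ...compress the outputs with an SVD basis and fit a cheap map
# k -> reduced coefficients (cubic regression as a stand-in for an NN).
U, _, _ = np.linalg.svd(Y.T, full_matrices=False)
Ur = U[:, :3]                               # 3-mode reduced basis
coeffs = Y @ Ur                             # training coefficients
P = np.vander(k_train, 4)
W, *_ = np.linalg.lstsq(P, coeffs, rcond=None)

# Online: evaluate the ROM surrogate at a new parameter, no full solve needed.
k_new = 1.3
y_rom = (np.vander([k_new], 4) @ W) @ Ur.T
err = np.abs(y_rom[0] - expensive_simulator(k_new)).max()
print(f"max surrogate error: {err:.2e}")
```

The offline cost is a handful of full-model runs; every online evaluation is then a small matrix product, which is where the computational savings of ML-assisted ROMs come from.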
One area of recent focus is approximating the dominant modes of the Koopman (or composition) operator as a method of dimensionality reduction. The Koopman operator is an infinite-dimensional linear operator that encodes the temporal evolution of system states through nonlinear dynamics [41]. It allows linear analytic methods to be applied to nonlinear systems, and makes it possible to infer properties of dynamical systems that are too complex for traditional analytic methods. Deep learning is being used to approximate Koopman operator embeddings, and adding physics-based knowledge to their training has the potential to broaden their generalization and explanatory power.
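The simplest data-driven approximation of the Koopman operator is dynamic mode decomposition (DMD): fit a linear operator K to snapshot pairs so that X' ≈ K X. The sketch below (a hypothetical linear system, so the recovery is exact) shows the mechanics; deep-learning variants replace the raw state with learned embedding coordinates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshots of a 2-D linear system with known eigenvalues 0.9 and 0.5.
true_eigs = np.array([0.9, 0.5])
T = rng.standard_normal((2, 2)) + 2 * np.eye(2)        # well-conditioned basis
M = T @ np.diag(true_eigs) @ np.linalg.inv(T)
X = np.empty((2, 30))
X[:, 0] = [1.0, -1.0]
for t in range(29):
    X[:, t + 1] = M @ X[:, t]

# DMD: least-squares fit of a linear operator K with X' ~ K X,
# a finite-dimensional approximation of the Koopman operator.
X1, X2 = X[:, :-1], X[:, 1:]
K = X2 @ np.linalg.pinv(X1)
eigs = np.sort(np.linalg.eigvals(K).real)[::-1]
print("recovered eigenvalues:", np.round(eigs, 4))
```

The recovered spectrum characterizes the dynamics (decay rates here, frequencies in the complex case), which is the kind of linear analysis the Koopman view makes available for nonlinear systems.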
Partial differential equations
For many physical systems, even when the governing equations are known, standard finite element and finite difference methods for solving partial differential equations can be very expensive. Using ML models, especially NN solvers, can significantly reduce the computational burden; at the same time, the solutions are differentiable and have a closed analytic form that can be transferred to any subsequent calculation. This approach has been used successfully for quantum many-body problems and the many-electron Schrödinger equation. Recently, Li et al. defined a Fourier neural operator that allows an NN to learn an entire family of partial differential equations, mapping any functional parameter dependence to a solution in Fourier space.
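A stripped-down sketch of the NN-solver idea: represent the solution with a single hidden layer of fixed random tanh features and solve for the output weights by least squares on collocation points (full NN solvers instead train all weights by gradient descent on the same residual). The test problem and all hyperparameters below are chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0
# (exact solution: u = sin(pi x)).
m = 80                                     # hidden units
a = rng.uniform(-8, 8, m)                  # fixed input weights
b = rng.uniform(-8, 8, m)                  # fixed biases

def features(x):
    """Hidden-layer activations and their second x-derivatives."""
    t = np.tanh(np.outer(x, a) + b)
    return t, (-2 * t * (1 - t ** 2)) * a ** 2   # d2/dz2 tanh = -2 tanh (1 - tanh^2)

xc = np.linspace(0.0, 1.0, 40)             # collocation points
phi, phi_xx = features(xc)
phi_b, _ = features(np.array([0.0, 1.0]))  # boundary rows

# Stack PDE-residual rows and weighted boundary rows, solve for output weights.
A = np.vstack([phi_xx, 50.0 * phi_b])
rhs = np.concatenate([-np.pi ** 2 * np.sin(np.pi * xc), [0.0, 0.0]])
w, *_ = np.linalg.lstsq(A, rhs, rcond=None)

xt = np.linspace(0.0, 1.0, 101)
u_hat = features(xt)[0] @ w
err = np.abs(u_hat - np.sin(np.pi * xt)).max()
print(f"max error vs exact solution: {err:.2e}")
```

Because the solution is an explicit combination of smooth basis functions, it can be evaluated and differentiated anywhere in closed form, which is the transferability advantage mentioned above.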
Inverse modeling uses the (potentially noisy) output of a system to estimate the underlying physical parameters and inputs. Inverse problems are important in the physics-based modeling community because they can reveal valuable information that cannot be directly observed. An example is CT scanning, where x-ray measurements are used to reconstruct 3D images of the structure of the human body.
Solving an inverse problem is often computationally expensive, since characterizing the posterior distribution of the physical parameters can require millions of forward-model evaluations. ML-based reduced models are becoming a realistic option because they can model high-dimensional phenomena from large amounts of data and are much faster than physical simulators.
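The surrogate-accelerated inversion loop can be sketched as follows. The forward model, its three observables, and the cubic-regression surrogate are all hypothetical stand-ins: the point is only that a handful of offline forward runs buys thousands of essentially free online evaluations during inversion.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(theta):
    """Stand-in for an expensive forward simulation: three observables
    as nonlinear functions of the physical parameter theta."""
    return np.array([np.sin(theta), theta ** 2, np.exp(-theta)])

# Offline: a small training set of forward runs, then a cheap surrogate
# (cubic polynomial regression here, an NN in the surveyed work).
theta_train = np.linspace(0.0, 2.0, 15)
Y = np.array([forward(t) for t in theta_train])
P = np.vander(theta_train, 4)
W, *_ = np.linalg.lstsq(P, Y, rcond=None)
surrogate = lambda t: np.vander(np.atleast_1d(t), 4) @ W

# Online: invert a noisy observation by dense search over the surrogate
# instead of millions of full forward evaluations.
theta_true = 1.3
obs = forward(theta_true) + 0.01 * rng.standard_normal(3)
grid = np.linspace(0.0, 2.0, 2001)
misfit = ((surrogate(grid) - obs) ** 2).sum(axis=1)
theta_hat = grid[int(np.argmin(misfit))]
print(f"true {theta_true:.2f}, estimated {theta_hat:.2f}")
```

A Bayesian treatment would replace the grid search with MCMC over the surrogate, but the cost structure (expensive offline, cheap online) is the same.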
Beyond computed tomography and seismic data processing, there has been much interest in the inverse design of materials: taking desired physical properties as input, models determine the atomic- and microscale structures that possess those properties.
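In its simplest form, inverse design inverts a trained property predictor by searching over candidate structure descriptors. The toy below uses an invented scalar property model in place of a trained NN; real pipelines search far higher-dimensional descriptor spaces, often with generative models.

```python
import numpy as np

# Hypothetical property model mapping a structure descriptor x in [0, 1]
# to a material property (stand-in for a trained NN predictor).
property_model = lambda x: 4.0 * x * (1.0 - x)

target = 0.75                                   # desired property value
candidates = np.linspace(0.0, 1.0, 10001)       # descriptor search space
scores = np.abs(property_model(candidates) - target)
x_design = candidates[int(np.argmin(scores))]
print(f"designed descriptor: {x_design:.3f} "
      f"-> property {property_model(x_design):.3f}")
```

Note the inverse is not unique here (two descriptors achieve the target), which is typical of inverse design and one reason generative and probabilistic formulations are popular.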