POD-Interpolation


The PODI (POD + Interpolation) method is a non-intrusive variant of the POD-Galerkin method.

The offline part that creates the $N$ reduced basis functions remains the same.

Then, a further step is added. It consists of computing the reduced coefficients for all snapshots within a training set of parameters $\mathcal{G}_{N_{train}} \subset \mathcal{G}$.

We denote these coefficients by $\alpha_i(\mu_k)$, $i = 1,\dots,N$, $k = 1,\dots,N_{train}$. We thus obtain $N_{train}$ pairs $(\mu_k, \alpha(\mu_k))$, where $\alpha(\mu_k) \in \mathbb{R}^N$. Through interpolation or regression, the function that maps the input parameter $\mu_k$ to the coefficients can be reconstructed. This function is then used during the online stage to compute the interpolated coefficients for a new parameter $\mu \in \mathcal{G}$ and to approximate the high-dimensional solution. Different interpolation methods may be employed. A prior sensitivity analysis of the quantity of interest with respect to the parameters can also be performed to improve the results; this preprocessing step corresponds to the so-called active subspaces method.
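The pairing of parameters with reduced coefficients can be sketched as follows. This is a minimal illustration with synthetic snapshot data and illustrative names (`mus`, `snapshots`, `coeff_map` are all assumptions, not part of any specific library); the interpolator chosen here is a radial basis function from SciPy, one of several possible choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic setup (illustrative): snapshots u(mu_k) of a 1-parameter
# problem, one column per training parameter.
mus = np.linspace(0.0, 1.0, 12)[:, None]          # Ntrain x 1 parameter samples
snapshots = np.array([[np.sin((i + 1) * m[0]) for i in range(50)]
                      for m in mus]).T            # 50 dofs x Ntrain

# Orthonormal POD basis of N modes from the SVD of the snapshot matrix.
basis = np.linalg.svd(snapshots, full_matrices=False)[0][:, :3]   # N = 3

# Reduced coefficients alpha(mu_k) = basis^T u(mu_k), one column per snapshot.
alphas = basis.T @ snapshots                      # N x Ntrain

# Fit an interpolator mapping mu -> alpha(mu); here a radial basis function.
coeff_map = RBFInterpolator(mus, alphas.T)
```

With zero smoothing, the RBF interpolant reproduces the training coefficients exactly at the sampled parameters, which is the defining property needed for the online stage.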

Offline

A POD procedure (see the offline part of the POD-Galerkin method).
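A common way to carry out this offline POD step is a truncated SVD of the snapshot matrix. The sketch below uses random placeholder data; the variable names and the choice of `N` are illustrative, not prescribed by the method.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one high-dimensional
# solution u(mu_k) for a training parameter (placeholder random data).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 15))   # 200 dofs, 15 snapshots

# POD basis via truncated SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
N = 4                     # number of reduced basis functions retained
basis = U[:, :N]          # columns = POD modes (orthonormal)
```

In practice `N` is chosen from the decay of the singular values `s`, e.g. so that the retained modes capture a prescribed fraction of the snapshot energy.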

Online

An interpolation of the reduced coefficients at the new parameter value, followed by reconstruction of the approximate high-dimensional solution.

Codes:

Python Jupyter notebook

Python code using the MORDICUS library

References