EIM
The Empirical Interpolation Method (EIM) can be combined with POD-Galerkin to derive an affine parameter decomposition when the PDE depends nonlinearly on the parameters.
Indeed, as explained in the Galerkin-POD section (here), an affine decomposition with respect to the parameter is needed in order to assemble, during the offline stage, the parameter-independent matrices involved in the problem, thereby making the online computations more efficient.
In the Galerkin-POD model problem, the reduced matrices $A1$ and $B1$ are computed offline, and their dependence on $\mu$ is handled online through scalar multiplication. The stationary Navier–Stokes equations are nonlinear, so a fixed-point iteration is required to solve them, which makes the formulation of the reduced problem less straightforward. Let us focus on $B1$, defined by
$$B1_{ij}=(\nabla \Phi_j,\nabla \Phi_i).$$We observe that
$$B1=P^T \ B1^{\mathcal{N}} P, $$where $P=[\Phi_1,\dots,\Phi_N] \in \mathbb{R}^{\mathcal{N}\times N}$ is the matrix whose columns are the discretized modes, i.e. $(P_{ij})_{i=1,\dots,\mathcal{N}}$ are the coefficients of $\Phi_j$ in $V_h$, and $B1^{\mathcal{N}}$ is the corresponding high-fidelity discretized matrix.
Thus, $B1^{\mathcal{N}}$ must first be assembled offline, which is only possible if its dependence on the parameter is affine, that is,
$$ \begin{equation} B1^{\mathcal{N}}=\sum_q \theta_q(\mu) \ \ B1^{\mathcal{N}}_q, \end{equation} $$where each matrix $B1^{\mathcal{N}}_q$ is independent of $\mu$.
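The offline/online split enabled by such an affine decomposition can be sketched as follows; all matrices and the $\theta_q$ below are illustrative stand-ins (random data, hypothetical parameter functions), not quantities assembled from an actual PDE:

```python
import numpy as np

# Minimal sketch of an affine decomposition with Q = 2 terms:
# B1(mu) = theta_1(mu) * B1_1 + theta_2(mu) * B1_2.
rng = np.random.default_rng(0)
Nh, N = 200, 5                          # high-fidelity and reduced dimensions
B1_terms = [rng.standard_normal((Nh, Nh)) for _ in range(2)]
P = rng.standard_normal((Nh, N))        # columns stand in for the POD modes

def theta(mu):
    # Hypothetical scalar parameter functions; the real theta_q come from the PDE.
    return np.array([1.0, mu[0] * mu[1]])

# Offline: project each parameter-independent term once.
B1_red_terms = [P.T @ Bq @ P for Bq in B1_terms]

# Online: cheap N x N linear combination, no high-fidelity assembly needed.
def reduced_B1(mu):
    return sum(t * Bq for t, Bq in zip(theta(mu), B1_red_terms))
```

The online cost is independent of $\mathcal{N}$: only small $N\times N$ matrices are combined for each new $\mu$.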
If an affine dependence is not available, the EIM can be employed. It approximates the high-fidelity matrix (or, more generally, the nonlinear high-fidelity part) by a linear combination as in (1). More precisely, the method builds an approximation using basis functions $(q_i)_{i=1,\dots,M}$ together with interpolation points (called “magic points”) and coefficients $(\alpha_i)_{i=1,\dots,M}$ that rely on certain instances of the parameter $\mu$, selected within the algorithm. Let us introduce the method with the following example:
Consider a function
$$ g(x;\mu)= \frac1{\sqrt{(x_1-\mu_1)^2+(x_2-\mu_2)^2}},$$with $x=(x_1,x_2)$ and $\mu=(\mu_1,\mu_2)$.
We aim to approximate $g$ by an interpolation
$$g_M(x;\mu):=\mathcal{I}_M[g](x;\mu)=\sum_{j=1}^M \alpha_j(\mu) q_j(x),$$where $\mathcal{I}_M$ is an interpolation operator and $M$ is the number of selected “magic” points. Interpolation conditions are imposed at these selected points:
$$ \begin{equation} \boxed{x_1,\dots,x_M\in\Omega:\quad g(x_n;\mu)=\mathcal{I}_M(g)(x_n;\mu),\quad n=1,\dots,M.} \end{equation} $$$(q_1,\dots,q_M)$ is the set of EIM basis functions, and $\alpha_m(\mu)$ are parameter-dependent coefficients.
The offline stage consists in:
- finding the magic points $x_1,\dots,x_M$,
- generating the basis functions $q_j$ and assembling them into a matrix $\mathbf{Q}_M$,
And then, the online stage for a new parameter value $\mu$ only requires:
- evaluating $g(x_n;\mu)$ at the $M$ magic points to get $\mathbf{G}(\mu)=(g(x_n;\mu))_{n=1,\dots,M}$,
- solving a small $M\times M$ linear system to get the coefficients $\alpha(\mu)$,
- forming $g_M(x;\mu)=\sum_{j=1}^M \alpha_j(\mu)\, q_j(x)$.
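The online steps above can be sketched in a few lines; the basis matrix `Q_M` and the magic-point indices below are random stand-ins for the offline outputs, used only to show the shapes of the computation:

```python
import numpy as np

# Stand-in offline data: Q_M holds the EIM basis sampled on a grid of Nx
# points, `magic` the indices of the magic points in that grid.
rng = np.random.default_rng(1)
Nx, M = 50, 4
Q_M = rng.standard_normal((Nx, M))   # stand-in for the offline basis matrix
magic = [3, 17, 29, 42]              # stand-in magic-point indices

def online(G_mu):
    """G_mu = (g(x_n; mu))_n at the M magic points -> g_M on the whole grid."""
    B = Q_M[magic, :]                # M x M interpolation matrix
    alpha = np.linalg.solve(B, G_mu) # small linear system for alpha(mu)
    return Q_M @ alpha               # g_M(x; mu) = sum_j alpha_j(mu) q_j(x)

g_M = online(rng.standard_normal(M))
```

By construction the result interpolates the data: evaluating `g_M` at the magic points returns `G_mu` exactly.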
EIM builds $(q_j,x_j)$ iteratively using a greedy residual criterion. We now describe in detail the offline and the online stages.
“OFFLINE”
- The first chosen parameter $\mu_1$ is the one whose snapshot $g(\cdot;\mu)$ has the largest $\mathcal{L}^{\infty}$ norm (we want to retrieve as much information on $g$ as possible with few points), and the associated magic point $x_1$ is the point that gives the most information on $g(\cdot;\mu_1)$, i.e. the one that maximizes its modulus.
Then the first basis function is $q_1(\cdot) = \frac{g(\cdot;\mu_1)}{g(x_1; \mu_1)}$. We can then find $\alpha_1$ as the coefficient corresponding to this basis function:
For each parameter $\mu \in \mathcal{G}$,
$\quad \quad \quad \quad \boxed{\mathcal{I}_1[g](\cdot;\mu)=\alpha_1(\mu)\,q_1(\cdot)}$
such that $\mathcal{I}_1[g](x_1;\mu)=g(x_1;\mu)$ from imposed conditions (2).
Thus, since $q_1(x_1)=1$ by normalization, we get $\alpha_1(\mu)=g(x_1;\mu)$.
- For $\mathcal{I}_2$, we seek the second magic point.
We define the residual $r(x;\mu)=g(x;\mu) - \mathcal{I}_1[g](x;\mu)$, and we find $\mu_2$ the parameter that maximizes $\|r(\cdot;\mu)\|_{\mathcal{L}^{\infty}}$.
Then, $x_2:= \mathrm{arg max}_{x \in \Omega}|r(x;\mu_2)|$.
The second basis function is $q_2(\cdot) = \frac{r(\cdot;\mu_2)}{r(x_2; \mu_2)}$.
We now seek $\alpha_2$:
For each parameter $\mu \in \mathcal{G}$, we have
$\quad \quad \quad \boxed{\mathcal{I}_2[g](\cdot;\mu)=\alpha_1(\mu)\,q_1(\cdot)+\alpha_2(\mu)\,q_2(\cdot)}$
such that $\mathcal{I}_2[g](x_i;\mu)=g(x_i;\mu), \quad i=1,2$ from (2), where $q_2(x_1)=0$ and $q_2(x_2)=1$ by construction. Thus,
$$\alpha_1(\mu)=g(x_1;\mu), \textrm{ and }\alpha_2(\mu) =g(x_2;\mu)-g(x_1;\mu)\,q_1(x_2).$$
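These first two iterations can be checked numerically on the example function $g$; the grid and the training parameters below are illustrative choices (the $\mu$ values are kept outside $[0,1]^2$ so that $g$ stays finite on the domain):

```python
import numpy as np

# First two EIM iterations by hand on the example g.
def g(x, mu):
    return 1.0 / np.sqrt((x[:, 0] - mu[0])**2 + (x[:, 1] - mu[1])**2)

xs = np.linspace(0.0, 1.0, 15)
X = np.array([(a, b) for a in xs for b in xs])          # grid for Omega
mus = [(-0.2, -0.2), (-0.5, -1.0), (-1.0, -0.4), (-0.8, -0.9)]

# Step 1: mu_1 maximizes ||g(.; mu)||_inf, x_1 maximizes |g(.; mu_1)|.
mu1 = mus[int(np.argmax([np.abs(g(X, mu)).max() for mu in mus]))]
i1 = int(np.argmax(np.abs(g(X, mu1))))                  # index of x_1
q1 = g(X, mu1) / g(X, mu1)[i1]

# Step 2: mu_2 maximizes the residual r = g - I_1[g] in L-infinity.
def residual(mu):
    return g(X, mu) - g(X, mu)[i1] * q1                 # alpha_1(mu) = g(x_1; mu)

mu2 = mus[int(np.argmax([np.abs(residual(mu)).max() for mu in mus]))]
r2 = residual(mu2)
i2 = int(np.argmax(np.abs(r2)))                         # index of x_2
q2 = r2 / r2[i2]
```

One can verify that $q_2(x_1)=0$ and $q_2(x_2)=1$, and that the boxed formulas for $\alpha_1$ and $\alpha_2$ reproduce $g$ at both magic points.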
And so on …
We find recursively the $M$ magic points, basis functions, and coefficients with the following interpolation problem:
$$ \forall\, 1 \leq i \leq M-1,\quad \mathcal{I}_{M-1}[g](x_i;\mu)= g(x_i;\mu),$$$$\mathcal{I}_{M-1}[g](\cdot;\mu)=\sum_{j=1}^{M-1} \alpha_j^{M-1}(\mu)\, q_j(\cdot).$$
“ONLINE”
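The whole greedy offline stage can be sketched for the example function $g$; the grid, the training parameter set, and the choice $M=5$ are illustrative assumptions:

```python
import numpy as np

# Greedy offline stage of the EIM on the example g.
def g(x, mu):
    return 1.0 / np.sqrt((x[:, 0] - mu[0])**2 + (x[:, 1] - mu[1])**2)

xs = np.linspace(0.0, 1.0, 20)
X = np.array([(a, b) for a in xs for b in xs])           # grid for Omega
mus = [(-0.1 - 0.3 * i, -0.1 - 0.3 * j) for i in range(4) for j in range(4)]

def eim_offline(M):
    Q = np.zeros((len(X), 0))                            # q_j sampled on the grid
    magic = []                                           # magic-point indices
    for _ in range(M):
        # Greedy parameter: largest L-infinity residual g - I_m[g].
        best = None
        for mu in mus:
            gx = g(X, mu)
            if magic:
                alpha = np.linalg.solve(Q[magic, :], gx[magic])
                res = gx - Q @ alpha                     # residual of current interpolant
            else:
                res = gx                                 # first step: no basis yet
            if best is None or np.abs(res).max() > np.abs(best).max():
                best = res
        i = int(np.argmax(np.abs(best)))                 # next magic point
        magic.append(i)
        Q = np.column_stack([Q, best / best[i]])         # normalized basis q_m
    return Q, magic

Q, magic = eim_offline(5)
```

A useful consequence of the construction is that the interpolation matrix $B=(q_j(x_i))_{ij}$ is lower triangular with unit diagonal, so the online system is cheap to solve.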
Now we are interested in a new parameter value $\mu^{target}$. To obtain the linear decomposition, we solve a small interpolation problem built from the previously computed basis functions and magic points (see also the details in the Python notebook):
$$ g_M(x;\mu^{target})=\sum_{i=1}^M \alpha_i(\mu^{target})\, q_i(x). $$
There exists a generalized form of this method (GEIM) and a discrete version named DEIM. The GEIM replaces the $M$ pointwise evaluations used by the EIM with more general measurements; in the presence of measurement noise, a stabilized variant of the method can be employed.