# Dr. Carlos N. Rautenberg

## Optimal Sensor Placement

The optimal placement of sensors (such as thermostats) is usually a complex multiscale problem where multiple nonlinear processes are coupled. Here we assume $\Omega$ is a room or building where sensors are placed and where a ventilation system with an inlet and an outlet is in place; see Fig. 1. The temperature of the room is described by $u:\Omega\to\mathbb{R}$, and it is advected by the velocity profile of the air $\vec{v}:\Omega\to\mathbb{R}^n$.

Fig. 1 - Room sample.
The convection-diffusion process is perturbed by a stochastic (Wiener) process $\eta$. We measure $u$ with a sensor at location $x$ by a weighted average within an effective range and obtain an output $h_x$, which is also perturbed by a stochastic process $\nu$. The system satisfied by these variables is \begin{align*} u'&=\alpha\Delta u+\vec{v}\cdot\nabla u+ \eta;\\ h_x&=C(x)u+\nu; \end{align*} where the sensor action is described by \begin{align*} C(x)w=\int_{\Omega}K(y-x)w(y)\, dy, \end{align*} with $K$ some function determined by properties of the sensor. A good criterion for sensor placement is obtained by minimizing the expected value of $\|u-\tilde{u}_x\|^2_{L^2(\Omega)}$, where $\tilde{u}_x$ is the output of the Kalman filter. For simple problems, where $\vec{v}$ is considered constant, isosurfaces of \begin{align*} x\mapsto J(x):=\int_0^1 \mathbb{E}\|u-\tilde{u}_x\|_{L^2}^2\, dt \end{align*} seem to be associated with a convex problem (see Fig. 2). In cases where the kernel $K$ is non-smooth and/or $\vec{v}$ is not constant but is the solution to the Navier-Stokes equations on the domain of interest (with appropriate boundary conditions), $x\mapsto J(x)$ no longer seems to come from a convex problem, as shown in Fig. 3.

Fig. 2 - Isosurfaces of $x\mapsto \int_0^1 \mathbb{E}\|u-\tilde{u}_x\|_{L^2}^2\, dt$. Fig. 3 - Isosurface of $x\mapsto \int_0^1 \mathbb{E}\|u-\tilde{u}_x\|_{L^2}^2\, dt$ in a non-convex problem.
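The placement criterion can be illustrated with a minimal finite-dimensional sketch: a 1D convection-diffusion discretization, an assumed Gaussian sensor kernel, and a standard discrete Kalman filter, whose error covariance trace approximates $\mathbb{E}\|u-\tilde{u}_x\|^2_{L^2}$. All parameter values below are illustrative, not taken from the work described here.

```python
import numpy as np

# Illustrative 1D discretization of u' = alpha*Laplace(u) + v.grad(u) + eta
n = 50                       # interior grid points on (0, 1)
dx = 1.0 / (n + 1)
dt = 1e-4
alpha, vel = 1.0, 0.5        # diffusion coefficient and constant velocity

# Explicit Euler transition matrix for the convection-diffusion dynamics
A = np.eye(n)
for i in range(n):
    A[i, i] -= dt * 2 * alpha / dx**2
    if i > 0:
        A[i, i - 1] += dt * (alpha / dx**2 + vel / (2 * dx))
    if i < n - 1:
        A[i, i + 1] += dt * (alpha / dx**2 - vel / (2 * dx))

Q = 1e-3 * np.eye(n)         # covariance of the process noise eta
R = np.array([[1e-2]])       # covariance of the measurement noise nu
grid = np.linspace(dx, 1 - dx, n)

def C_of_x(x, width=0.1):
    """Sensor action C(x)w ~ integral of K(y - x) w(y) dy, Gaussian K (assumed)."""
    K = np.exp(-((grid - x) / width) ** 2)
    return (K * dx).reshape(1, n)

def J(x, steps=200):
    """Approximate J(x) by accumulating the Kalman filter error covariance."""
    C = C_of_x(x)
    P = Q.copy()                               # initial error covariance
    total = 0.0
    for _ in range(steps):
        S = C @ P @ C.T + R                    # innovation covariance
        gain = P @ C.T @ np.linalg.inv(S)      # Kalman gain
        P = A @ (P - gain @ C @ P) @ A.T + Q   # Riccati covariance update
        total += np.trace(P) * dx * dt         # trace(P)*dx ~ E|u - u_tilde|^2_{L^2}
    return total

xs = np.linspace(0.1, 0.9, 9)                  # candidate sensor locations
costs = np.array([J(x) for x in xs])
best = xs[np.argmin(costs)]                    # best placement among candidates
```

The brute-force scan over candidate locations stands in for the optimization over $x$; in the non-convex cases above, a local method alone would not suffice.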

## Quasi-variational Inequalities

Quasi-variational Inequalities (QVIs) were introduced by Lions and Bensoussan in the context of impulse control problems. They arise as mathematical models of various phenomena in, for instance, game theory, solid and continuum mechanics, or superconductivity.

A general model is the following: Let $V$ be a Banach space, $A:V\to V'$ a (possibly nonlinear) given map, and $f\in V'$. Additionally, let the map $\mathbf{K}:V\to 2^{V}$ be given and assume each evaluation $\mathbf{K}(v)$ is closed, convex and non-empty in $V$. The Quasi-Variational Inequality Problem then reads: $$\label{eq:QVI}\tag{\mathrm{P_{QVI}}} \text{Find } y\in \mathbf{K}{(y)}: \langle A(y)-f,v-y\rangle_{V',V} \geq 0, \quad \forall v\in \mathbf{K}{(y)}.$$ In function-space applications, $V$ is in general a Sobolev space such as $W^{1,p}(\Omega)$, and two of the most popular choices for $\mathbf{K}:V\to 2^{V}$ are induced by

• Gradient-type constraints, so that \begin{equation*} \mathbf{K}_{\text{grad}}(y):=\{w\in W^{1,p}(\Omega): |\nabla w(x)|\leq \Theta(y)(x)\quad \text{a.e. }x\in\Omega\}, \end{equation*} for some $\Theta:W^{1,p}(\Omega)\to L^q(\Omega)$.
• Obstacle-type constraints, so that \begin{equation*} \mathbf{K}_{\text{obst}}(y):=\{w\in W^{1,p}(\Omega): \Phi_1(y)(x)\leq w(x)\leq \Phi_2(y)(x)\:\: \text{ a.e. }x\in\Omega\}, \end{equation*} for some $\Phi_i:W^{1,p}(\Omega)\to L^q(\Omega)$.
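One common way to approach \eqref{eq:QVI} is a fixed-point iteration over the constraint set: given $y_k$, solve the variational inequality on the (now fixed) convex set $\mathbf{K}(y_k)$ to obtain $y_{k+1}$, and repeat. A minimal finite-dimensional sketch of this idea for a one-sided obstacle-type constraint, assuming a discrete 1D Laplacian for $A$, a hypothetical obstacle map $\Phi$, and projected gradient descent as the inner VI solver:

```python
import numpy as np

n = 30
h = 1.0 / (n + 1)
# A: discrete 1D Dirichlet Laplacian (linear, strongly monotone); f: constant source
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = 10.0 * np.ones(n)

def Phi(z):
    # Hypothetical obstacle map: the upper bound depends on the solution itself
    return 0.05 + 0.5 * np.abs(z)

def solve_vi(upper, y0, step=0.3 * h**2, iters=2000):
    """Projected gradient for: find y with y <= upper, <A y - f, v - y> >= 0."""
    y = np.minimum(y0, upper)
    for _ in range(iters):
        # gradient step on the monotone operator, then project onto {w <= upper}
        y = np.minimum(y - step * (A @ y - f), upper)
    return y

y = np.zeros(n)
for _ in range(50):                     # outer fixed-point loop over K(y)
    y_new = solve_vi(Phi(y), y)
    if np.max(np.abs(y_new - y)) < 1e-8:
        y = y_new
        break                           # y is (approximately) a QVI solution
    y = y_new
```

The contraction of the outer loop hinges on $\Phi$ being suitably Lipschitz (here with constant $1/2$); for general $\mathbf{K}$ the iteration need not converge, which is precisely one of the analytical difficulties of QVIs.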

## Total variation regularization

Image denoising can be successfully tackled via highly nonlinear regularization such as total variation. Here one is given real noisy data $f=u_{true}+\eta\in L^2(\Omega)$, where $u_{true}$ is a real image (the recovery target) and $\eta$ is noise with the following properties: $\int_{\Omega}\eta=0$ and $\int_{\Omega}|\eta|^2=\sigma^2.$ For $\alpha>0$, the TV model reads: $$\label{eq:TVROF}\tag{TV} \min_{u\in BV(\Omega)} \:\frac{1}{2}\int_{\Omega} |u-f|^2+\alpha \int_\Omega |\mathcal{D}u|,$$ where $\int_\Omega |\mathcal{D}u|:=|\mathcal{D}u|(\Omega)$, the total mass of the Borel measure $\mathcal{D}u$ determined by the distributional gradient of $u$: \begin{align*} \int_{\Omega} |\mathcal{D} u| = \sup\left\{\int_{\Omega} u \:\text{div} \mathbf{v}\, \mathrm{d}x \left|\: \mathbf{v} \in C_c^{1}(\Omega;\mathbb{R}^2), \: |\mathbf{v}(\mathbf{x})|_{\infty}\leq 1 \: \text{a.e.} \: \mathbf{x} \in \Omega \right.\right\}. \end{align*} The effects on the solution of increasing or decreasing $\alpha$ are well understood: for large $\alpha$, the reconstruction contains no noise, but details of $u_{true}$ are also lost; for small $\alpha$, details of $u_{true}$ are retained, but (possibly) so is noise. This is shown in the following images.

 Figure 1: Noisy image $f=u_{true}+\eta$. Figure 2: $u_{true}$ and zooming regions.

Figure 3: lines in $u$ for small $\alpha$. Figure 4: corner in $u$ for small $\alpha$. Figure 5: lines in $u$ for high $\alpha$. Figure 6: corner in $u$ for high $\alpha$.
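The qualitative role of $\alpha$ can be reproduced with a minimal 1D sketch, assuming the non-smooth term $|\mathcal{D}u|$ is replaced by the smooth surrogate $\sqrt{|\mathcal{D}u|^2+\varepsilon}$ so that plain gradient descent applies (this smoothing and all parameter values are illustrative; it is not the algorithm used for the figures above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)
u_true = (t > 0.3).astype(float) - (t > 0.7).astype(float)  # piecewise-constant signal
f = u_true + 0.1 * rng.standard_normal(n)                    # noisy data f = u_true + eta

def tv_denoise(f, alpha, eps=1e-2, step=0.02, iters=5000):
    """Gradient descent on 0.5*|u - f|^2 + alpha * sum sqrt(|Du|^2 + eps)."""
    u = f.copy()
    for _ in range(iters):
        du = np.diff(u)                       # forward differences (discrete Du)
        w = du / np.sqrt(du**2 + eps)         # derivative of the smoothed |Du|
        # D^T w: each edge value w_i pushes -w_i onto node i and +w_i onto node i+1
        grad = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= step * ((u - f) + alpha * grad)
    return u

u_small = tv_denoise(f, alpha=0.05)   # small alpha: details kept, some noise remains
u_large = tv_denoise(f, alpha=1.0)    # large alpha: noise removed, details flattened
```

Comparing `u_small` and `u_large` against `f` reproduces the trade-off described above: the reconstruction moves farther from the data, and its total variation shrinks, as $\alpha$ grows.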

Further, in order to localize the effects of $\alpha$, it is appropriate to consider it as locally variant, that is, to consider a generalization of \eqref{eq:TVROF} of the form $$\label{eq:GTVROF}\tag{GTV} \min_{u\in BV(\Omega)} \:\frac{1}{2}\int_{\Omega} |u-f|^2+ \int_\Omega \alpha(x)|\mathcal{D}u|.$$ However, a spatially variant $\alpha:\Omega\to\mathbb{R}$ generates a number of theoretical obstacles and difficulties in the derivation of algorithms.
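At the discrete level, at least, the generalization \eqref{eq:GTVROF} is a small change: the scalar $\alpha$ becomes an array of weights, one per finite difference. A self-contained 1D sketch (assuming, as before, the smooth surrogate $\sqrt{|\mathcal{D}u|^2+\varepsilon}$ and a hypothetical piecewise-constant weight $\alpha(x)$; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.linspace(0, 1, n)
u_true = (t > 0.5).astype(float)              # single jump at t = 0.5
f = u_true + 0.1 * rng.standard_normal(n)

# Hypothetical local weight: strong smoothing on [0, 0.4), weak elsewhere;
# one weight per forward difference (edge), hence length n - 1
alpha = np.where(t[:-1] < 0.4, 1.0, 0.05)

u = f.copy()
eps, step = 1e-2, 0.02
for _ in range(5000):
    du = np.diff(u)                           # discrete Du
    w = alpha * du / np.sqrt(du**2 + eps)     # alpha(x)-weighted smoothed-TV gradient
    grad = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
    u -= step * ((u - f) + grad)
```

In this sketch the heavily weighted region is flattened while the jump in the lightly weighted region survives; the theoretical and algorithmic difficulties mentioned above arise in the genuinely infinite-dimensional $BV$ setting, not in this discrete toy problem.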

## Shape Optimization

Under construction!