This homework problem presents a nonlinear problem taken from a simplified industrial process monitor. All of the necessary models and Jacobians are described below. The task is to write functions for these models and use the `*kf` engine or framework functions (`kff`, `kffsim`, and `kffmct`) to create an extended Kalman filter, simulate it, and run a Monte-Carlo test.

As you go along, it may be helpful to look at other examples, like the pendulum problem, Mr. Eustace's jump, or homework 1, and be sure to reference the online documentation frequently:

http://www.anuncommonlab.com/doc/starkf/

When you're done, or if you get stuck, you can compare your results with some potential solutions:

```
skfexample('hw2ans1'); % The frameworks method
skfexample('hw2ans2'); % The engine method
```

There are many ways to solve this problem, and your approach need not match the above in implementation. In performance, however, a good filter should be pretty close.

The model we'll use is a machine that places tiny drops of liquid on a surface with three nozzles. The nozzles are each machined very carefully, but with each application of liquid, impurities in the liquid cause a small amount of gunk to build up inside the nozzles, and this build-up grows and diminishes over time. The build-up then affects how much liquid comes out at each application of droplets.

Fortunately, there's a small optical sensor in this machine that quickly reads the size of the applied liquid drops, and this measurement can be used to estimate the current amount of build-up in each nozzle.

We'll have three states, representing the effective size of each nozzle relative to some nominal condition, and we'll have three measurements, representing the size of the applied drops. All of the quantities are nondimensionalized, so we won't have to worry about units here.

The build-up varies such that, when there's more gunk, the fluctuations in gunk build-up are larger; it may increase or decrease more dramatically. However, over time, it's generally likely that the gunk will clear out, so there's a tendency for the gunk levels to return to some nominal state.

Let the differences between the current effective nozzle size and a nominal nozzle size be \(x_1\), \(x_2\), and \(x_3\), so the state vector is: $$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

The state changes with each application of drops from the nozzles, and so we will model it as a discrete system. $$x_{k} = f(x_{k-1}, u_{k-1}) + q_{k-1}$$

where \(f\) is the propagation function, \(u_{k-1}\) is an input generalized force and \(q_{k-1} \sim N(0, Q_{k-1})\) is the Gaussian process noise.

The propagation function for the ith nozzle looks like this: $$f_i(x) = x_i - p(x) \frac{x_i}{x_i + 1}$$

where \(p\) is a generalized pressure term. By inspection, we can see that, for a given pressure, a positive \(x_i\) will produce a negative change in \(x_i\). A negative \(x_i\) will produce a larger, positive change in \(x_i\). This all serves to return \(x_i\) to zero, its nominal condition. It's not important to memorize this stuff, but it may help to consider the system intuitively while writing and checking code.

The generalized pressure term also depends on the state and is given by: $$p(x) = \frac{a u}{x_1 + x_2 + x_3 + 3}$$

Upon inspection, this is somewhat intuitive too. When the effective nozzle sizes are all larger than normal (\(x_1\), \(x_2\), and \(x_3\) are positive), then there's less pressure behind the nozzles. This pressure term will affect almost everything in the process.
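Though the homework itself targets MATLAB and the `*kf` tools, the pressure and propagation models are easy to sketch in any language. Here's a minimal illustration in Python/NumPy (the names `pressure` and `propagate` are ours, and \(a = 0.1\) is the value given later in the text):

```python
import numpy as np

a = 0.1  # pressure constant (value given later in the text)

def pressure(x, u):
    # p(x) = a * u / (x1 + x2 + x3 + 3)
    return a * u / (np.sum(x) + 3.0)

def propagate(x, u):
    # f_i(x) = x_i - p(x) * x_i / (x_i + 1), element-wise over the three nozzles
    return x - pressure(x, u) * x / (x + 1.0)
```

At the nominal state, \(x = 0\), nothing changes, and a nonzero \(x_i\) is pulled back toward zero, matching the intuition above.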

We said above that more gunk build-up engenders increased variation in gunk build-up. This is captured in the process noise covariance matrix, which has a constant part and a part that depends on the current state. The ith process noise variance is given by: $$q_{i,k-1} = \left(\frac{1}{100}\right)^2 + \left(\frac{x_i}{10}\right)^2$$

There are no covariance terms, so the process noise covariance matrix for sample \(k-1\) is: $$Q_{k-1} = \begin{bmatrix} q_{1, k-1} & 0 & 0 \\ 0 & q_{2, k-1} & 0 \\ 0 & 0 & q_{3, k-1} \\ \end{bmatrix}$$
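A sketch of this covariance in Python/NumPy (the name `process_noise_cov` is ours):

```python
import numpy as np

def process_noise_cov(x):
    # q_i = (1/100)^2 + (x_i/10)^2 on the diagonal; no cross-covariance terms
    return np.diag(0.01**2 + (np.asarray(x) / 10.0)**2)
```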

Calculating the Jacobian of the propagation function is usually one of the messier parts of constructing a Kalman filter. We will make sure that this derivation keeps the many little parts clear and organized to minimize mistakes.

Let's first consider the first element of \(f\), \(f_1\): $$f_1(x) = x_1 - p(x) \frac{x_1}{x_1 + 1}$$

Let's differentiate this by \(x_i\) using the product rule: $$\frac{\partial}{\partial x_i} f_1(x) = \frac{\partial}{\partial x_i} x_1 - \frac{\partial p(x)}{\partial x_i} \frac{x_1}{x_1 + 1} - p(x) \frac{\partial}{\partial x_i} \frac{x_1}{x_1 + 1} $$

Looking at the first term, it's clear that \(\frac{\partial}{\partial x_i} x_1\) is 1 when \(i = 1\) and zero otherwise. That's easy enough.

Next, let's look at the derivative of the pressure term: $$\frac{\partial}{\partial x_i} p(x) = \frac{\partial}{\partial x_i} \frac{a u}{x_1 + x_2 + x_3 + 3} = \frac{- a u}{\left(x_1 + x_2 + x_3 + 3\right)^2}$$

In fact, we can substitute \(p(x)\) back into this for simplification: $$\frac{\partial p(x)}{\partial x_i} = \frac{-p(x)}{x_1 + x_2 + x_3 + 3}$$

As we can see, it doesn't matter *which* state we differentiate by;
they'll all yield the same result. This will be useful for a clean
implementation.
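This claim is easy to spot-check numerically. A small sketch in Python/NumPy (our own function names) compares the analytic \(\partial p / \partial x_i = -p(x)/(x_1 + x_2 + x_3 + 3)\) against central differences in each state:

```python
import numpy as np

a, u = 0.1, 1.0

def pressure(x):
    # p(x) = a * u / (x1 + x2 + x3 + 3)
    return a * u / (np.sum(x) + 3.0)

x = np.array([0.05, -0.02, 0.10])
analytic = -pressure(x) / (np.sum(x) + 3.0)

eps = 1e-7
for i in range(3):
    dx = np.zeros(3)
    dx[i] = eps
    numeric = (pressure(x + dx) - pressure(x - dx)) / (2.0 * eps)
    assert abs(numeric - analytic) < 1e-9  # same value for every i
```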

Returning to the product-rule expansion of \(f_1\) above, the third partial derivative term is: $$\frac{\partial}{\partial x_i} \frac{x_1}{x_1 + 1}$$

When \(i\) is 1, this simplifies, via the quotient rule, to: $$\frac{\partial}{\partial x_1} \frac{x_1}{x_1 + 1} = \frac{1}{\left(x_1 + 1\right)^2}$$

When \(i\) is anything other than 1, this derivative is of course 0.

We can now reconstruct the expansion of \(f_1\)'s derivative for the case that \(i\) is 1: $$\frac{\partial}{\partial x_1} f_1(x) = 1 - \frac{\partial p(x)}{\partial x_1} \frac{x_1}{x_1 + 1} - \frac{p(x)}{\left(x_1 + 1\right)^2}$$

and for the case that \(i\) is 2 or 3: $$ \frac{\partial}{\partial x_2} f_1(x) = \frac{\partial}{\partial x_3} f_1(x) = -\frac{\partial p(x)}{\partial x_i} \frac{x_1}{x_1 + 1}$$

We've completed the differentiation of \(f_1\), giving us the top row of the Jacobian of \(f\) wrt \(x\), \(F(x)\). The remaining rows have exactly the same form. We can express the entire matrix as: $$F(x) = \frac{\partial}{\partial x} f(x) = \begin{bmatrix} \frac{\partial}{\partial x_1} f_1(x) & -\frac{\partial p(x)}{\partial x_i} \frac{x_1}{x_1 + 1} & -\frac{\partial p(x)}{\partial x_i} \frac{x_1}{x_1 + 1} \\ -\frac{\partial p(x)}{\partial x_i} \frac{x_2}{x_2 + 1} & \frac{\partial}{\partial x_2} f_2(x) & -\frac{\partial p(x)}{\partial x_i} \frac{x_2}{x_2 + 1} \\ -\frac{\partial p(x)}{\partial x_i} \frac{x_3}{x_3 + 1} & -\frac{\partial p(x)}{\partial x_i} \frac{x_3}{x_3 + 1} & \frac{\partial}{\partial x_3} f_3(x) \end{bmatrix}$$

When implementing this, note that \(\partial p(x) / \partial x_i\) is the same regardless of \(i\), so this term can be computed first and used throughout.
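As a sanity check on the whole Jacobian, here's a sketch in Python/NumPy (our own names) that builds \(F(x)\) with \(\partial p/\partial x_i\) computed once, then verifies it against central differences of the propagation function:

```python
import numpy as np

a, u = 0.1, 1.0

def pressure(x):
    return a * u / (np.sum(x) + 3.0)

def propagate(x):
    # f_i(x) = x_i - p(x) * x_i / (x_i + 1)
    return x - pressure(x) * x / (x + 1.0)

def jacobian_F(x):
    p  = pressure(x)
    dp = -p / (np.sum(x) + 3.0)   # dp/dx_i, the same for every i
    r  = x / (x + 1.0)            # the x_i / (x_i + 1) terms
    # Off-diagonal entry (i, j) is -dp * r_i; the diagonal adds 1 - p/(x_i+1)^2.
    return -dp * np.outer(r, np.ones(3)) + np.diag(1.0 - p / (x + 1.0)**2)

# Verify against central differences at an arbitrary state.
x0  = np.array([0.03, -0.05, 0.08])
eps = 1e-6
F_num = np.zeros((3, 3))
for j in range(3):
    dx = np.zeros(3)
    dx[j] = eps
    F_num[:, j] = (propagate(x0 + dx) - propagate(x0 - dx)) / (2.0 * eps)
assert np.allclose(jacobian_F(x0), F_num, atol=1e-8)
```

This kind of finite-difference check catches most Jacobian mistakes before they ever reach the filter.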

That was the messiest part of the derivations we need.

The observation function measures the _difference_ in the size of the drop from nominal. This too uses \(p(x)\) and simply maps the state as follows: $$h(x) = p(x) x$$

That is, the pressure times the extra nozzle size results in extra droplet size.

The optical sensor has zero mean error and a standard deviation of 0.003 (again, unitless). The covariance matrix is therefore simply: $$R = 0.003^2 I_{3 \times 3}$$

where \(I_{3 \times 3}\) is the 3-by-3 identity matrix.
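The observation model and measurement covariance in the same Python/NumPy sketch style (our own names):

```python
import numpy as np

a, u = 0.1, 1.0

def pressure(x):
    return a * u / (np.sum(x) + 3.0)

def observe(x):
    # h(x) = p(x) * x: pressure times extra nozzle size gives extra drop size
    return pressure(x) * x

R = 0.003**2 * np.eye(3)  # optical sensor: zero-mean, 0.003 standard deviation
```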

The partial derivative of \(h_1(x)\) wrt \(x_i\) is: $$\frac{\partial}{\partial x_i} h_1(x) = \frac{\partial p(x)}{\partial x_i} x_1 + p(x) \frac{\partial}{\partial x_i} x_1$$

When \(i = 1\), expanding with the pressure derivative found above: $$\frac{\partial}{\partial x_i} h_1(x) = \frac{-p(x) x_1}{x_1 + x_2 + x_3 + 3} + p(x)$$

Grouping like terms: $$\begin{aligned} \frac{\partial}{\partial x_i} h_1(x) &= p(x) \left( \frac{-x_1}{x_1 + x_2 + x_3 + 3} + 1 \right) \\ &= p(x) \frac{x_2 + x_3 + 3} {x_1 + x_2 + x_3 + 3} \\ &= \frac{p(x)}{x_1 + x_2 + x_3 + 3} \left( x_2 + x_3 + 3 \right) \end{aligned}$$

Similarly, when \(i \ne 1\), the same substitution gives: $$\frac{\partial}{\partial x_i} h_1(x) = \frac{p(x)}{x_1 + x_2 + x_3 + 3} \left( -x_1 \right)$$

The derivation for \(h_2\) and \(h_3\) is similar, so the whole Jacobian matrix is then: $$H(x) = \frac{p(x)}{x_1 + x_2 + x_3 + 3} \begin{bmatrix} x_2 + x_3 + 3 & -x_1 & -x_1 \\ -x_2 & x_1 + x_3 + 3 & -x_2 \\ -x_3 & -x_3 & x_1 + x_2 + 3 \end{bmatrix}$$
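Note that the bracketed matrix is just \((x_1 + x_2 + x_3 + 3) I - x \mathbf{1}^T\), so \(H(x)\) can be built in a couple of lines. A sketch with a central-difference check (Python/NumPy, our own names):

```python
import numpy as np

a, u = 0.1, 1.0

def pressure(x):
    return a * u / (np.sum(x) + 3.0)

def observe(x):
    return pressure(x) * x   # h(x) = p(x) * x

def jacobian_H(x):
    p  = pressure(x)
    s3 = np.sum(x) + 3.0
    # Row i: diagonal entry s3 - x_i, off-diagonal entries -x_i, all times p/s3
    return (p / s3) * (s3 * np.eye(3) - np.outer(x, np.ones(3)))

# Verify against central differences at an arbitrary state.
x0  = np.array([0.03, -0.05, 0.08])
eps = 1e-6
H_num = np.zeros((3, 3))
for j in range(3):
    dx = np.zeros(3)
    dx[j] = eps
    H_num[:, j] = (observe(x0 + dx) - observe(x0 - dx)) / (2.0 * eps)
assert np.allclose(jacobian_H(x0), H_num, atol=1e-8)
```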

This completes all of the derivations needed for the filter and simulation.

Initially, we expect the nozzles to be at their nominal size, and so the estimated state, \(\hat{x}_0\) is: $$\hat{x}_0 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$

The covariance associated with the initial knowledge (presumably learned from other machines) is: $$P_0 = \begin{bmatrix} 0.05^2 & 0 & 0 \\ 0 & 0.05^2 & 0 \\ 0 & 0 & 0.05^2 \end{bmatrix}$$

This is everything that's necessary to build the extended Kalman filter. You'll need to write functions for the propagation function \(f\), its Jacobian \(F(x)\), the process noise covariance \(Q_{k-1}\), the observation function \(h\), its Jacobian \(H(x)\), and the measurement noise covariance \(R\).

We can let \(a\) be 0.1, and we can let the input vector, \(u\), be 1 for all samples (for the sake of filtering, it doesn't much matter what it is, as long as we know it).

The simulation should take 300 steps, and the Monte-Carlo test should run 100 simulations.

For this problem, the estimate should track the truth pretty well over time, but there is significant (unpredictable) process noise, and so we will see that the filter can track the trends but not the instantaneous noise.
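To see this behavior without the `*kf` tools, here is a self-contained, textbook EKF simulation sketch in Python/NumPy that strings all of the pieces above together (all names are ours; the homework itself would use `kff` and `kffsim`):

```python
import numpy as np

rng = np.random.default_rng(42)
a, u = 0.1, 1.0

def pressure(x): return a * u / (np.sum(x) + 3.0)
def f(x):        return x - pressure(x) * x / (x + 1.0)
def h(x):        return pressure(x) * x
def Q(x):        return np.diag(0.01**2 + (x / 10.0)**2)

def F(x):
    p  = pressure(x)
    dp = -p / (np.sum(x) + 3.0)
    r  = x / (x + 1.0)
    return -dp * np.outer(r, np.ones(3)) + np.diag(1.0 - p / (x + 1.0)**2)

def H(x):
    p, s3 = pressure(x), np.sum(x) + 3.0
    return (p / s3) * (s3 * np.eye(3) - np.outer(x, np.ones(3)))

R = 0.003**2 * np.eye(3)

# Truth drawn from the initial distribution; the estimate starts at nominal.
P0     = 0.05**2 * np.eye(3)
x_true = rng.multivariate_normal(np.zeros(3), P0)
x_hat  = np.zeros(3)
P      = P0.copy()

for k in range(300):
    # Simulate the truth and a noisy optical measurement.
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(3), Q(x_true))
    z      = h(x_true) + rng.multivariate_normal(np.zeros(3), R)

    # EKF propagation; note that Q is evaluated at the *estimate*,
    # which is the source of the low NEES discussed below.
    Fk = F(x_hat)
    Qk = Q(x_hat)
    x_hat = f(x_hat)
    P = Fk @ P @ Fk.T + Qk

    # EKF measurement update.
    Hk = H(x_hat)
    K  = P @ Hk.T @ np.linalg.inv(Hk @ P @ Hk.T + R)
    x_hat = x_hat + K @ (z - h(x_hat))
    P = (np.eye(3) - K @ Hk) @ P
    P = 0.5 * (P + P.T)   # keep the covariance numerically symmetric
```

Plotting `x_hat` against `x_true` over the 300 steps should show the estimate following the trends in the gunk build-up while smoothing over the step-to-step noise.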

For the Monte-Carlo test, we expect the normalized estimation error squared (NEES) statistic to be a little low, near 75%. The remaining 95% statistics should all be very close to 95%. Just why the NEES is low is subtle: the process noise is calculated from the state estimate and affects the state estimate in a nonlinear way. The result is that errors in the state cause errors in the process noise calculation, which map back to the state. If we were to fix this problem, we could use a conservative, higher value for the process noise in the filter than we do in the simulation. We could also use consider parameters.

When you're ready to check your results against ours, see:

```
skfexample('hw2ans1'); % The frameworks method
skfexample('hw2ans2'); % The engine method
```

Or, see the results on the web.

`*kf` v1.0.3, January 18th, 2017

©2017 An Uncommon Lab