Looking at examples is nice, but sometimes working through a problem from scratch is the best way to learn. This homework problem presents an easy dynamics and filtering problem, and it's up to you to build a filter and simulation for it, see that the results look good, and run a Monte-Carlo test to see that the statistics are appropriate.

The filter can be built using the Kalman Filtering Framework functions or with the `*kf` engine. Of course, it's best to try both methods.

We'll discuss the dynamics and observation models below. When you're ready, you can start building the propagation function, measurement function, and setting up the filter.

As you go along, it may be helpful to look at other examples, like the pendulum problem or Mr. Eustace's jump, and be sure to reference the online documentation frequently:

http://www.anuncommonlab.com/doc/starkf/

When you're done, or if you get stuck, you can compare your results with some potential solutions:

```
skfexample('hw1ans1'); % The frameworks method
skfexample('hw1ans2'); % The engine method
skfexample('hw1ans3'); % The "basic filter" method
```

However, these are just three solutions out of many, and variations are possible within all three.

Like many homework sets, this one starts with a spring-mass-damper system. Let's call its position relative to equilibrium \(p\), velocity \(v\), acceleration \(a\), mass \(m\), spring constant \(k\), and damping coefficient \(b\). The acceleration at any moment in time is then: $$a(t) = \frac{1}{m} \left( -k p(t) - b v(t) \right)$$

Expressing this with a matrix instead: $$a(t) = \begin{bmatrix} \frac{-k}{m} & \frac{-b}{m} \end{bmatrix} \begin{bmatrix} p(t) \\ v(t) \end{bmatrix}$$

Let's use \(x = \left[p, v\right]^T\) as the state vector. It's then easy to see (dropping the dependence on time) that: $$\dot{x} = \begin{bmatrix} v \\ a \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \frac{-k}{m} & \frac{-b}{m} \end{bmatrix} \begin{bmatrix} p \\ v \end{bmatrix} = A x $$

We thus have the Jacobian of the continuous-time dynamics. Discrete filters, however, require the Jacobian of the discrete-time dynamics -- that is, we need a matrix that propagates \(\Delta t\) into the future: $$ x(t + \Delta t) = F x(t)$$

There are several ways to make this conversion. The best is: $$F = e^{A \Delta t}$$

where \(e\) is a matrix exponential. For linear systems, this is exact. For nonlinear systems, this is a good approximation when \(\Delta t\) is small.
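As a concrete check, this conversion is a single call to MATLAB's `expm` function (the *matrix* exponential, not the element-wise `exp`). The numeric values below come from the constants given later in this problem:

```
% Continuous-time dynamics Jacobian for the spring-mass-damper
% (using the constants defined later: m = 10 kg, k = 5 N/m, b = 2 N/(m/s))
m  = 10;   % kg
k  = 5;    % N/m
b  = 2;    % N/(m/s)
dt = 0.25; % s

A = [ 0,    1; ...
     -k/m, -b/m];

% Discrete-time propagation matrix, F = e^(A*dt). Note the use of expm
% (matrix exponential) and not exp (element-wise exponential)!
F = expm(A * dt);
```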

Instead of using \(t\) and \(t + \Delta t\) all over the place, let's use sample indices, so that \(x_k\) is the state at the \(k\)th sample. We have: $$x_k = F x_{k-1}$$

For Kalman filtering problems, however, we assume that there's also some amount of noise in the system -- something we can't know about the dynamics. Let's suppose that the noise on this system is an unknown acceleration, \(q\), that is approximately constant over each sample period. The additional acceleration causes a change in velocity of \(\Delta t \, q\) and a change in position of \(\frac{1}{2} \Delta t^2 q\). So, we can now say: $$x_k = F x_{k-1} + F_q q_{k-1}$$

where: $$F_q = \begin{bmatrix} \frac{1}{2} \Delta t^2 \\ \Delta t \end{bmatrix}$$

We'll say that \(q\) is drawn from a normal distribution with zero mean and variance \(Q\). That is, \(q \sim N(0, Q)\).

For Kalman filters, it's often more computationally efficient to express the process noise as an “effective process noise”, \(q_e\): $$ q_{e, k-1} = F_q q_{k-1}$$

so that the dynamics take the form: $$x_k = F x_{k-1} + q_{e, k-1}$$

Kalman filters work with the covariance of the noise term, so what is the covariance of the effective process noise, \(Q_e\)? $$\begin{aligned} Q_e &= E(q_{e, k-1} q^T_{e, k-1}) \\ &= E(F_q q_{k-1} q^T_{k-1} F^T_q) \\ &= F_q E(q_{k-1} q^T_{k-1}) F^T_q \\ &= F_q Q F^T_q \end{aligned}$$
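Forming \(Q_e\) in code is only a couple of lines. A minimal sketch, using the process-noise variance and time step given later in this problem:

```
% Effective process noise covariance, Q_e = F_q * Q * F_q'
dt  = 0.25;           % s
Q   = 0.1^2;          % (m/s^2)^2, variance of the unknown acceleration
F_q = [0.5*dt^2; dt]; % Maps acceleration noise into [position; velocity]
Q_e = F_q * Q * F_q.'; % 2x2 effective process noise covariance
```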

Let's summarize. The discrete-time propagation function is simply: $$x_k = F x_{k-1}$$

The Jacobian of the discrete-time dynamics is: $$F = e^{A \Delta t}$$

where: $$A = \begin{bmatrix} 0 & 1\\ \frac{-k}{m} & \frac{-b}{m} \end{bmatrix}$$

And the effective process noise is: $$Q_e = F_q Q F^T_q$$

In order to actually implement this system, we'll need values for the various constants. For the process noise, let's use a standard deviation of 0.1 m/s^2; the variance, \(Q\), is the square of this value. Altogether: $$\begin{aligned} m &= 10 \, kg \\ k &= 5 \, N/m \\ b &= 2 \, N/(m/s)\\ \Delta t &= 0.25 \, s \\ Q &= \left(0.1 \, m/s^2\right)^2 \end{aligned}$$
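Gathering these constants and assembling the discrete-time dynamics might look like the sketch below. The function-handle form of the propagation is illustrative only; the exact interface expected by the framework functions or the engine is described in the online documentation.

```
% Problem constants
m  = 10;    % kg
k  = 5;     % N/m
b  = 2;     % N/(m/s)
dt = 0.25;  % s
Q  = 0.1^2; % (m/s^2)^2, process noise variance

% Discrete-time dynamics
A   = [0, 1; -k/m, -b/m]; % Continuous-time Jacobian
F   = expm(A * dt);       % Discrete-time Jacobian
F_q = [0.5*dt^2; dt];     % Maps acceleration noise into the state
Q_e = F_q * Q * F_q.';    % Effective process noise covariance

% Propagation function: x_k = F * x_{k-1}
f = @(x) F * x;
```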

This part is straightforward. Let's suppose we measure the position only. The measurement is then: $$z_k = \begin{bmatrix} 1 & 0 \end{bmatrix} x_k + r_k$$

where \(r_k\) is the measurement noise at sample \(k\). Let the distribution be \(N(0, R)\), where the variance, \(R\), is \(\left(0.1 \, m\right)^2\).

The Jacobian of the observation function, \(H\), is clearly: $$H = \begin{bmatrix} 1 & 0 \end{bmatrix}$$
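In code, the observation model is similarly short. Again, the function-handle form is just a sketch; noise is normally added by the simulation, not the observation function itself:

```
% Observation model: measure position only
H = [1, 0]; % Jacobian of the observation function
R = 0.1^2;  % m^2, measurement noise variance

% Observation function: z_k = H * x_k (noise added by the simulation)
h = @(x) H * x;
```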

The final pieces necessary to build a simulation and filter for this system are the initial state and initial estimate covariance. Let's suppose that the initial covariance is: $$P = \begin{bmatrix} 1 m^2 & 0 \\ 0 & 2 \frac{m^2}{s^2} \end{bmatrix}$$

That is, the initial estimate error has a standard deviation of 1 m in position and \(\sqrt2\) m/s in velocity.

Let's suppose that the mean initial state, which will be the initial state of the filter, is: $$\hat{x}_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
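In code, with illustrative variable names:

```
% Initial estimate and initial estimate covariance
x_hat_0 = [1; 0];       % [m; m/s]
P_0     = [1, 0; ...    % m^2
           0, 2];       % m^2/s^2
```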

When creating a truth simulation to exercise the filter, the initial true state should be different from the initial estimate. Usually, the initial true state is taken to be the initial estimate plus an error drawn from \(P_0\). When using `kffsim`, `kffmct`, or the automatically generated tests from the `*kf` engine, this is handled automatically. However, if you're using a different approach, you'll need to draw the initial truth with something like this:

```
% Draw the initial truth based on the initial estimate. This will
% ensure that P_0 is accurate.
x_true_0 = x_hat_0 + mnddraw(P_0);
```

That wraps up the description of the problem. Now it's your turn to try implementing a filter. Here are some final tips:

- When using the `kff` functions, note that `kffsim` can be used for a single simulation and `kffmct` for a Monte-Carlo test. These make for quick development work.
- When using the `*kf` engine, be sure you're generating the example simulation and Monte-Carlo test as well. These get you more than half way there.
- If you choose to solve this problem using one of the built-in basic filters, you'll have to write your own simulation and Monte-Carlo test. This takes about 3-4 times as much code as the above options, but is a good exercise.

For all possibilities, let the total simulation time be 30 seconds. That's enough to see convergence, plus a long stretch of stable tracking afterwards.
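If you go the basic-filter route and write everything yourself, the core simulation-and-filter loop might look something like the sketch below. It draws noise with `randn` and `chol` directly rather than with the toolkit's `mnddraw`, and all variable names are illustrative:

```
% A hand-rolled simulation and linear Kalman filter for the
% spring-mass-damper problem (a sketch; names are illustrative).

% Constants
m = 10; k = 5; b = 2; dt = 0.25;
Q = 0.1^2; R = 0.1^2;

% System matrices
A   = [0, 1; -k/m, -b/m];
F   = expm(A * dt);
F_q = [0.5*dt^2; dt];
Q_e = F_q * Q * F_q.';
H   = [1, 0];

% Initial conditions
P     = [1, 0; 0, 2];
x_hat = [1; 0];
x     = x_hat + chol(P).' * randn(2, 1); % Draw the initial truth

n = round(30 / dt); % 30 s of simulation
for kk = 1:n

    % Simulate: propagate the truth and form a noisy measurement.
    x = F * x + F_q * (sqrt(Q) * randn());
    z = H * x + sqrt(R) * randn();

    % Kalman filter: propagate the estimate and covariance...
    x_hat = F * x_hat;
    P     = F * P * F.' + Q_e;

    % ... and correct them with the measurement.
    K     = P * H.' / (H * P * H.' + R);
    x_hat = x_hat + K * (z - H * x_hat);
    P     = P - K * H * P;

end
```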

For this problem, the estimate should clearly track the truth after about the first second or so. In the Monte-Carlo test, the theoretical and empirical error statistics should closely match (that is, about 95% of the estimate errors should fall inside the 95% probability bounds).

When you're ready to check your results against ours, see:

```
skfexample('hw1ans1'); % The frameworks method
skfexample('hw1ans2'); % The engine method
skfexample('hw1ans3'); % The basic filter method
```

Or, see the results on the web.

When you've finished this filter, homework problem 2 will be waiting for you.

`*kf` v1.0.3, January 18th, 2017

©2017 An Uncommon Lab