Linear and extended filters:

- Propagate the state estimate to the current time;
- Propagate the error covariance (or information) as well;
- Predict the measurement;
- Form the error covariance (or information matrix) of the predicted measurement;
- Correct the state estimate with information from the measurement;
- And correct the error covariance as well.

The parts of this that correspond to the user's problem, like the propagation function, must be provided to `*kf` for it to build the filter.

Linear and extended filters also have extensive options for speed and robustness.

This determines how the state is propagated.

`f`

When this option is selected, the propagation will call some user-provided function to update the state from the previous time to the current time. The interface will be something like:

`x_k = f(t_km1, t_k, x_km1, u_km1);`

The actual interface will depend on other options. See the Interfaces box to see what the filter expects for your chosen options.

When this option is selected, one must provide the name of the function in the box that appears.

When this option is selected, the propagation will use the state transition matrix, `F_km1`, optionally with the input Jacobian, `F_u_km1` (both discussed below), to update the state, as:

`x_k = F_km1 * x_km1 + F_u_km1 * u_km1;`
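The linear propagation step above can be sketched in a few lines. This is a purely illustrative plain-Python version (the names `propagate_linear` and `mat_vec` are made up for the sketch; the tool generates its own code):

```python
def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def propagate_linear(F_km1, x_km1, F_u_km1=None, u_km1=None):
    """One linear propagation step; the input term is optional."""
    x_k = mat_vec(F_km1, x_km1)
    if F_u_km1 is not None:
        x_k = [a + b for a, b in zip(x_k, mat_vec(F_u_km1, u_km1))]
    return x_k

# Double-integrator example with dt = 1: state is [position, velocity], and
# the input is a commanded acceleration.
F  = [[1.0, 1.0],
      [0.0, 1.0]]
Fu = [[0.5],
      [1.0]]
print(propagate_linear(F, [0.0, 1.0], Fu, [2.0]))  # [2.0, 3.0]
```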

When this option is selected, one must provide the input-mapping matrix, `F_u_km1`, in the box that appears (only if using the input vector; see below). This matrix can be a constant, an input, or a function of the state. When it's a constant, one must provide the value (an expression or the name of a variable in the workspace). When it's an input, nothing further is necessary, but the matrix will need to be passed to the filter on each step. When it's a function, provide the name of a function to call to calculate `F_u_km1`. The interface will be the same as for `f` above, namely:

```
F_u_km1 = F_u_fcn(t_km1, t_k, x_km1, u_km1);
```

Sometimes, one propagates the states oneself, for instance as part of a feedforward control system. In this case, there's no sense in asking the filter to perform work that has already been done. One can use the Direct propagation type to provide the propagated state estimate as an input.

`F_km1`

This matrix maps the change in state to the change in the updated state. It's really only called the “state transition matrix” for linear systems, where it fully updates the state, but we'll use the term even for nonlinear functions.

For linear systems, this is `F_km1` in the propagation:

`x_k = F_km1 * x_km1 + F_u_km1 * u_km1;`

For nonlinear systems, it is the Jacobian of the propagation function, `f`, wrt the state, evaluated at the current state estimate. That is:

$$ F_{k-1} = \left. \frac{\partial f}{\partial x} \right|_{x_{k-1}} $$

It can be a constant, an input, or a function of the current state estimate. The interface for the function will be the same as for the other propagation-type functions:

```
F_km1 = F_fcn(t_km1, t_k, x_km1, u_km1);
```

`Q_km1`

This filter architecture assumes that the true state evolves according to whatever propagation type is selected above *plus* a noise term, `q_km1`, called the process noise. This is a discrete-time random number with zero mean and covariance `Q_km1`.

The process noise covariance can be a constant, an input, or a function:

`Q_km1 = Q_fcn(t_km1, t_k, x_km1, u_km1);`

The process noise is assumed to be constant over the time step.

Note: if there is some map from “real” noise terms, \(q'_{k-1}\), to their effect on the state, say \(G\) in:

$$ q_{k-1} = G q'_{k-1} $$

where \(q'_{k-1} \sim \mathcal{N}(0, Q'_{k-1})\), then we use \(Q_{k-1} = G Q'_{k-1} G^T\), and we pretend that \(q_{k-1} \sim \mathcal{N}(0, Q_{k-1})\). If necessary, one can effectively use colored noise by adding noise terms to the state, or one could switch to an unscented Kalman filter, which does not make the assumption that the noise term is additive.
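As a small numeric sketch of that mapping (plain Python, illustrative only; `G` here is a hypothetical noise-input matrix for a [position, velocity] state with a scalar acceleration noise):

```python
def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

dt = 1.0
G = [[0.5 * dt ** 2],   # how the scalar noise enters position
     [dt]]              # how it enters velocity
Q_prime = [[4.0]]       # variance of the "real" scalar noise q'

Q = mat_mul(mat_mul(G, Q_prime), transpose(G))  # Q = G * Q' * G'
print(Q)  # [[1.0, 2.0], [2.0, 4.0]]
```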

When this is selected, the input vector, `u_km1`, will be an input to the filter and will be passed to all of the functions in the propagation section, including the nonlinear propagation function and the functions for `F_km1`, `F_u_km1`, and `Q_km1`. For linear propagation, this implies that there must be an `F_u_km1` matrix.

Just as for propagation, the observation function can be nonlinear or linear, or the predicted observation can be input directly to the filter. It can also simply select certain elements of the state.

`h`

When this option is selected, the filter will call some user-provided function to form the measurement from the propagated state estimate. The interface will be something like:

`z_hat_kkm1 = h(t_k, x_kkm1);`

where `z_hat_kkm1` is the predicted measurement at sample k based on information from k-1, and likewise `x_kkm1` is the predicted state at k given information from k-1.

The actual interface will depend on other options. See the Interfaces box to see what the filter expects for your chosen options.

When this option is selected, one must provide the name of the function in the box that appears.

Further, in some cases, it's useful for this nonlinear function to return the innovation vector, `y_k` (the difference between the measurement and the predicted measurement, `z_k - z_hat_kkm1`), instead of returning the predicted measurement. This is used when the two quantities can't be simply subtracted. When this is the case, select the box below, make sure `h` returns `y_k`, and note that the actual measurement, `z_k`, will now be an input to all of the observation-type functions.

```
y_k = h(t_k, x_kkm1, z_k);
```

When this option is selected, the observation will use the observation matrix (sensitivity matrix), `H_k`, optionally with the input Jacobian, `H_u_k` (both discussed below), to determine the predicted measurement, as:

`z_hat_kkm1 = H_k * x_kkm1 + H_u_k * u_km1;`

Note that there is a subtlety here; we're using `u_km1`, the input vector from k-1, instead of from k. This is because “this sample” is considered to be sample k, and so the input vector has not yet affected the measurement; instead, `u_km1` was assumed to be acting when the measurement was made.

When this option is selected, one must provide the input-mapping matrix, `H_u_k`, in the box that appears (only if using the input vector; see below). This matrix can be a constant, an input, or a function of the state. When it's a constant, one must provide the value (an expression or the name of a variable in the workspace). When it's an input, nothing further is necessary, but the matrix will need to be passed to the filter on each step. When it's a function, provide the name of a function to call to calculate `H_u_k`. The interface will be the same as for `h` above, namely:

```
H_u_k = H_u_fcn(t_k, x_kkm1, u_km1);
```

Sometimes, the measurement is really just some subset of the state, e.g. it might be the first three states and the seventh through ninth states, `z_k = x_k([1:3 7:9])`. In this case, there's no need for a function to do the job, nor is there any need for the observation matrix (and its input will become inactive in the interface). When this option is selected, the filter will generate code that draws from the state estimate and estimate covariance appropriately instead of using `h` and `H_k`, allowing the filter to skip some unnecessary multiplication.

To use this form, provide the indices (1-indexed) of the state that correspond to the observation, such as `[1:3 7:9]` for the above.
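The selection above amounts to nothing more than indexing; the implicit `H_k` is a selection matrix that never actually needs to be formed. A plain-Python sketch (1-indexed to match the tool's convention; the 9-state vector is hypothetical):

```python
# The "select states" observation type: z_hat is just indexed elements of the
# propagated state estimate.
idx = [1, 2, 3, 7, 8, 9]                    # 1-indexed, i.e. [1:3 7:9]
x_kkm1 = [float(i) for i in range(1, 10)]   # a hypothetical 9-state estimate

z_hat_kkm1 = [x_kkm1[i - 1] for i in idx]
print(z_hat_kkm1)  # [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]

# The equivalent (never formed) H_k would be a selection matrix:
H_k = [[1.0 if j == i - 1 else 0.0 for j in range(9)] for i in idx]
```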

Sometimes, one propagates the states oneself, for instance as part of a feedforward control system. In this case, one may also form the predicted observation. Selecting this option allows the predicted measurement to be an input. This implies that the propagated state estimate is also an input (without a propagated state estimate, how would one possibly have a predicted measurement?).

Further, instead of providing the predicted measurement and actual measurement, one can provide the innovation vector, `y_k`, the difference between the two (`z_k - z_hat_kkm1`). This should only be done when the innovation is actually not a simple subtraction for some reason.

`H_k`

This matrix maps the change in state to the change in the measurement.

For linear systems, this is `H_k` in the observation:

`z_k = H_k * x_k + H_u_k * u_km1;`

For nonlinear systems, it is the Jacobian of the observation function, `h`, wrt the state, evaluated at the current state estimate. That is:

$$ H_k = \left. \frac{\partial h}{\partial x} \right|_{x_{k|k-1}} $$

It can be a constant, an input, or a function of the propagated state estimate. The interface for the function will be the same as for the other observation-type functions:

```
H_k = H_fcn(t_k, x_kkm1);
```

`R_k`

This filter architecture assumes that the observation is a function of the true state according to whatever observation type is selected above *plus* a noise term, `r_k`, called the measurement noise. This is a discrete-time random number with zero mean and covariance `R_k`.

The measurement noise covariance can be a constant, an input, or a function:

`R_k = R_fcn(t_k, x_kkm1);`

Measurement noise is often uncorrelated across different sensors, resulting in a block-diagonal `R_k` matrix. In the case where all elements of the noise are uncorrelated, `R_k` will be diagonal. In either case, the filter can take advantage of this for reduced run-time: for block-diagonal, it can implement “sensor groups”; for diagonal, it can use sequential (scalar) updates. See Correction Style, below.

When this is selected, the input vector, `u_km1`, will be an input to the filter and will be passed to all of the functions in the observation section, including the nonlinear observation function and the functions for `H_k`, `H_u_k`, and `R_k`. For linear observations, this implies that there must be an `H_u_k` matrix.

The uncertainty associated with the state estimate can be expressed in many ways, and each has its own advantages.

This is the “normal” covariance type used by the Kalman filter. The covariance matrix is propagated and corrected on each step of the filter. When the number of states is smaller than the number of observations, it's also the fastest option. However, the Kalman correction can lead to numerical problems when the covariance has eigenvalues that differ by many orders of magnitude. Choosing Joseph form is recommended to avoid this to a degree, but the UDU form (below) is better.

This form does not use the covariance matrix explicitly, but rather propagates and corrects a matrix, U, and a vector, d, such that:

`U * diag(d) * U.' == P`

where P is the covariance and U is an upper uni-triangular matrix (zeros below the diagonal, ones on the diagonal, and other values above). In doing so, this method is operating on the “square root” of the covariance, allowing much better numerical stability. A UDU form can maintain the same precision using single-type floating-point values as a Kalman filter can with double-type floating-point values. This is critical when the covariance has some large and some small elements.
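A minimal sketch of such a factorization, in plain Python (the tool's own `udfactor` routine will differ in detail, and a production version would need guards for ill-conditioned input):

```python
def ud_factor(P):
    """UDU' factorization of a symmetric positive-definite matrix:
    P == U * diag(d) * U', with U upper uni-triangular."""
    n = len(P)
    U = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        d[j] = P[j][j] - sum(d[k] * U[j][k] ** 2 for k in range(j + 1, n))
        for i in range(j):
            U[i][j] = (P[i][j] - sum(d[k] * U[i][k] * U[j][k]
                                     for k in range(j + 1, n))) / d[j]
    return U, d

P = [[4.0, 2.0],
     [2.0, 3.0]]
U, d = ud_factor(P)

# Reconstruct U * diag(d) * U' to confirm it matches P:
n = len(P)
P_check = [[sum(U[i][k] * d[k] * U[j][k] for k in range(n)) for j in range(n)]
           for i in range(n)]
```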

When using sequential updates (see Correction Style), the UDU filter is nearly as fast as the Kalman filter. However, when using the full matrix update (“normal” correction style) or sensor groups, it is significantly slower. In the literature, UDU filters essentially always operate with sequential updates for this reason.

Information filters operate on the information matrix, the inverse of the covariance (for our Gaussian assumptions). This is useful for two reasons. First, they are faster when the number of observations is larger than the number of states. Second, they can indicate “no information” (corresponding to infinite covariance) -- which is of course the case when the matrix is full of zeros.

Information filters should generally operate with the “normal” correction style or with sequential updates. Sensor groups should only be used when an “updates” vector is required, and the sensor group update

The observation vector usually consists of the results of many different sensors, and the errors of these sensors are often uncorrelated. When this is true, there are several ways that one can update the state and covariance using one sensor at a time, cutting down the matrix sizes and potentially dramatically reducing runtime. It's also the appropriate way to handle the case that different sensors may be updated at different times.

This method results in the “standard” Kalman filter update step -- the measurement noise covariance is not broken down into individual components. This generally takes the longest, but is the most familiar and general, and results in the most compact code.

This requires that the measurement noise covariance only contain terms on the diagonal. In fact, it assumes that the R_k “matrix” is actually just a vector of the terms on the diagonal. When this is the case, the Kalman filter need not invert an m-by-m matrix (where m is the size of the observation vector), but can instead invert m scalars, which is clearly much faster. This is also called scalar updates.

This method also allows one to include an `updates` vector, specifying which measurements have been updated since the filter last ran.

For example, if `z_k` is of length 5, and if its first two indices and last index have been updated with new measurements, `updates` would be:

`[true true false false true]`
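A bare-bones sketch of sequential (scalar) updates with an `updates` mask might look like this in plain Python (`scalar_updates` is a made-up name for illustration; the real generated code handles much more):

```python
def scalar_updates(x, P, z, H, r_diag, updates):
    """Apply each fresh scalar measurement one at a time (Kalman scalar update)."""
    n = len(x)
    for i, fresh in enumerate(updates):
        if not fresh:
            continue                                              # stale measurement
        h = H[i]                                                  # row of H_k
        Ph = [sum(P[a][b] * h[b] for b in range(n)) for a in range(n)]  # P * h'
        s = sum(h[a] * Ph[a] for a in range(n)) + r_diag[i]       # innovation variance
        K = [Ph[a] / s for a in range(n)]                         # Kalman gain
        y = z[i] - sum(h[a] * x[a] for a in range(n))             # scalar innovation
        x = [x[a] + K[a] * y for a in range(n)]
        P = [[P[a][b] - K[a] * Ph[b] for b in range(n)] for a in range(n)]
    return x, P

x0 = [0.0, 0.0]
P0 = [[1.0, 0.0], [0.0, 1.0]]
z  = [1.0, 2.0]                  # second measurement is stale
H  = [[1.0, 0.0], [0.0, 1.0]]
r  = [1.0, 1.0]                  # diagonal of R_k, stored as a vector
x1, P1 = scalar_updates(x0, P0, z, H, r, [True, False])
print(x1)  # [0.5, 0.0]: only the first measurement was used
```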

Note that if the measurement noise covariance is *not* diagonal but is constant, then one can find a matrix, `U`, to diagonalize the measurement noise covariance. Suppose the full measurement noise covariance is `R`. Then we can find:

`U * R_new * U.' == R`

by using a UDU decomposition (`udfactor`). This will leave `R_new` with just diagonals. In order to use `R_new` in our problem, we must modify the observation and Jacobians:

```
H_new == inv(U) * H
z_new == inv(U) * z == h_new(t_k, x_k, ...);
```

Note that U will be upper uni-triangular (zeros below the diagonal, ones on the diagonal, and other values above). This is easy to invert using back-substitution.
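For instance, applying `inv(U)` to a vector via back-substitution never requires forming the inverse at all (illustrative plain Python; `unitri_solve` is a hypothetical helper):

```python
def unitri_solve(U, v):
    """Solve U * w = v by back-substitution, with U upper uni-triangular,
    so that w == inv(U) * v without ever forming inv(U)."""
    n = len(v)
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = v[i] - sum(U[i][j] * w[j] for j in range(i + 1, n))
    return w

U = [[1.0, 2.0],
     [0.0, 1.0]]
v = [5.0, 3.0]
print(unitri_solve(U, v))  # [-1.0, 3.0]
```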

One can then use `R_new`, `H_new`, `h_new`, etc., in order to take advantage of sequential or scalar updates.

Note that this approach only makes sense if the measurement noise covariance is constant (or at least always diagonalized by the same `U`); otherwise, the time spent finding the diagonalizing matrix, `U`, on each step will cost about as much as the matrix inversion it is meant to replace.

When using multiple sensors that communicate at different rates, one rarely has a single up-to-date measurement vector. In this case, one will wish to use only the up-to-date values from the measurement vector in order to update the state estimate. By using “sensor groups”, this is made (fairly) easy.

Suppose values from sensor 1 become the first three values of the measurement vector. Then sensor 2 occupies the next value. Perhaps sensor 3 occupies the final two values. The sensor groups would look like this:

`groups = {[1 2 3], 4, [5 6]}`

This would go in the box at the right.

The generated filter would have an input called `updates`. This will be a vector of length n, where n is the number of sensors (3 for the example above). To say that the first and third sensors in the measurement vector are up-to-date, this would be passed in for `updates`:

`[true false true]`

The filter would then only use the relevant measurements for the update (along with the relevant rows from the Jacobians and covariance matrices). Note that there is assumed to be no covariance between errors in the different sensors.
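The row selection implied by sensor groups can be sketched like so (plain Python, illustrative only):

```python
# Sensor groups map each sensor to the 1-indexed rows of the measurement
# vector it occupies; the per-sensor `updates` flags pick the active rows.
groups  = [[1, 2, 3], [4], [5, 6]]   # as in the example above
updates = [True, False, True]        # sensors 1 and 3 are fresh

active = [i for g, fresh in zip(groups, updates) if fresh for i in g]
print(active)  # [1, 2, 3, 5, 6]

# The filter would then use only these rows of z_k and H_k, and the matching
# diagonal blocks of R_k, for the correction.
```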

This is also generally faster than the “normal” update, because it breaks the full matrices into smaller individual-sensor-sized chunks and operates on those smaller chunks.

In the extreme that each sensor reading is a scalar, this will be the same as sequential (or scalar) updates.

When this is selected, the `updates` variable will be passed into any observation-type functions (the functions for `h`, `H_k`, `R_k`, etc.).

This is one of the most powerful parts of the `*kf` Estimator Design Tool, and it helps design proper algorithms for realistic scenarios, where there are multiple sensors communicating at different rates.

The Kalman correction process implies that the covariance of the predicted measurement is known. This covariance is formed by first propagating the estimate covariance, converting the estimate covariance to an equivalent observation covariance, and then adding on the additional sensor noise covariance. However, when these steps involve calculating `H_k` or `R_k` from the state estimate, then they are likely a little wrong, since the estimate is a little wrong. In order to home in on better values, the Kalman correction process can be repeated several times, each time using the updated state estimate to calculate more accurate values for `H_k` and `R_k`. This only makes sense for highly nonlinear observations where the differences in these matrices are expected to matter.

Iteration repeats the most costly portion of the filter update, the Kalman correction, a number of times, and so iteration is a very expensive option. However, it can make a dramatic difference on certain problems. For more than 2 or so iterations, it may be less expensive to run an unscented filter instead.
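To get a feel for what iteration buys, here's a toy scalar iterated correction with a hypothetical measurement function h(x) = x² (plain Python, illustrative only; the generated filter's structure will differ):

```python
def h(x):
    """Hypothetical nonlinear scalar measurement function."""
    return x * x

def H_fcn(x):
    """Its Jacobian, dh/dx, evaluated at x."""
    return 2.0 * x

def iterated_correct(x_kkm1, P, z, R, passes):
    """Scalar iterated correction: re-linearize about the latest iterate."""
    x = x_kkm1
    for _ in range(passes):
        Hk = H_fcn(x)
        s = Hk * P * Hk + R                 # innovation variance
        K = P * Hk / s                      # gain, re-linearized each pass
        # Iterated-EKF form: linearize about x, difference from the prior:
        x = x_kkm1 + K * (z - h(x) - Hk * (x_kkm1 - x))
    return x, (1.0 - K * Hk) * P

# Prior of 1.5, measurement z = 4.0 (so the "true" state is near 2.0):
x_one, _   = iterated_correct(1.5, 1.0, 4.0, 0.01, 1)  # one pass: plain EKF
x_three, _ = iterated_correct(1.5, 1.0, 4.0, 0.01, 3)  # three passes
# With more passes, the estimate moves closer to 2.0.
```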

The normal number of passes is 1 (the default), so a value above one results in an iterated extended filter.

Check this box to build a Schmidt-type filter, which allows more conservative corrections to the covariance/information matrix.

Sometimes, there's significant uncertainty in a true process, more than can be accounted for with the zero-mean process noise. For instance, maybe there's uncertainty in the air density, but we don't want to estimate air density because it's unobservable or just too tedious. We can instead “consider” the effect of the uncertainty in air density on the state over time by making it a consider parameter.

Consider parameters are random variables with some known covariance. Further, there may be covariance between the state and the consider parameters (unlike the process noise, which is assumed to be independent). When we use consider parameters in a filter, we keep track of this changing covariance information, but we never estimate the parameters themselves.

There are multiple types of consider filters. A common application uses this additional covariance to inform the Kalman gain calculation. In doing so, the additional covariance makes the gain larger, pushing the estimate closer to the measurement than to the prediction. This is called a Schmidt-type filter, and it helps prevent a filter from being overconfident in its predictions. To restate: the primary advantage is the ability to allow uncertainty in the model that is not well-handled by the zero-mean, state-independent process noise.

This tool takes the convention that the consider parameters are themselves zero-mean. That is, for air density above, the consider parameter would actually be the difference in air density from some nominal value. One can then build the expected value of air density directly into the model. This is usually easiest.

When using consider parameters, the covariance/information matrix will need to represent an additional relationship: the covariance between the estimate and the consider parameter. When using a covariance matrix, this covariance will be maintained separately.

When using UD form, the U and d factors will represent the covariance of an augmented system created from the state and the consider parameters. It will be nx+nc-by-nx+nc, where nx is the dimension of the state and nc is the dimension of the consider parameters. For UD forms, the consider covariance, P_cc (below), is always assumed to be constant.

When using an information matrix, it will similarly be nx+nc-by-nx+nc and represent the augmented system. The consider covariance itself is expected to remain constant.

`F_c`

This is the Jacobian of the propagation function, `f`, wrt the consider parameters.

`H_c`

This is the Jacobian of the observation function, `h`, wrt the consider parameters.

`P_cc`

This is the covariance of the consider parameters. It can be a constant or an input to the filter. It is commonly constant, and for UD and information filters, it must not change.

`P_xc_0`

This is the initial covariance between the state estimate and the consider parameters. For `nx` states and `np` consider parameters, this is `nx` by `np`.

When no initial covariance is known, this is often all zeros.

Check this box to protect against division by zero when calculating the gain. This costs a small amount in runtime but is recommended for the sake of safety.

Check this box if the innovation vector, `y_k` (the difference between the measurement and the predicted measurement), should be an output from the filter. This is useful in understanding filter performance over time. For instance, the innovation vector should be reasonably close to white for a proper filter.

When this is output, the Monte-Carlo test (if generated) will use this output for statistics on filter performance.

Check this box if the innovation covariance, `P_zz_k`, should be an output from the filter. This is useful in understanding filter performance over time.

When this is output, the Monte-Carlo test (if generated) will use this output for statistics on filter performance.

`*kf` v1.0.3, January 18th, 2017

©2017 An Uncommon Lab