## Setting Up the Engine

The first part of creating a filter is to specify some basic facts about the filter you'll be generating, such as the desired name, initial conditions, an optional reference state, and an overall architecture.

### Filter Name

This name will be used to generate all of the various files. For instance, for a name of my_filter, the generated file names will be (depending on which options are selected):

- Primary files (for MATLAB target language)
  - my_filter_step.m: Primary filter function for a single step
  - my_filter_init.m: Filter initialization
- Examples and testing files
  - my_filter_example.m: Example initialization and simulation of filter
  - my_filter_mc.m: Monte-Carlo test of filter
- Interface files
  - my_filter_model.slx: Simulink block wrapper for filter implementation
  - my_filter_c_gen.m: Function to generate C code from MATLAB code (running this requires MATLAB Coder).
  - my_filter_mex_gen.m: Function to generate a compiled MEX file wrapper for the filter implementation. This function is automatically executed immediately after filter generation, creating my_filter_mex.ext, where ext is the appropriate extension for each platform (Windows, OS X, Linux).
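For reference, a typical use of the generated MATLAB files might look like the sketch below. The exact signatures of my_filter_init and my_filter_step depend on the options selected during generation, so treat these argument lists as assumptions:

```matlab
% Hypothetical use of the generated filter functions over n time steps.
[x, P] = my_filter_init();                 % initial state and covariance
for k = 2:n
    % Step from t(k-1) to t(k) with input u and measurement z.
    [x, P] = my_filter_step(t(k-1), t(k), x, P, u(:, k-1), z(:, k));
end
```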

### Initial State

The initialization function can set the initial estimate, and so it will need to know what to use. Further, the example simulation and Monte-Carlo test will need to use some initial estimate. This box should contain text that evaluates to the initial state, for example [1; 0; 0] or my_x_0 where my_x_0 exists in the MATLAB workspace.
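For instance, to use a workspace variable rather than a literal (the variable name and values here are arbitrary):

```matlab
% Define an initial state estimate in the MATLAB workspace; entering
% my_x_0 in the box is then equivalent to entering [1; 0; 0] directly.
my_x_0 = [1; 0; 0];
```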

### Initial Uncertainty

The initialization function can set the initial covariance matrix, information matrix, or set of particles. Kalman filters use the covariance matrix, information filters use the information matrix, and particle filters use a set of particles. Even when working with square-root forms, this box will contain the full matrices, and the square-root form will be calculated and used in the initialization.

This box should contain text that evaluates to the initial covariance, for example diag([1 2 3]) or my_P_0 where my_P_0 exists in the MATLAB workspace.
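For square-root forms, the relationship between the full matrix entered here and the factor used internally can be sketched as follows (the exact factorization the engine uses depends on the selected form, e.g., Cholesky vs. UDU):

```matlab
% The box always takes the full covariance matrix.
my_P_0 = diag([1 2 3]);

% For a square-root form, a factor is computed during initialization;
% conceptually, e.g., a lower Cholesky factor:
S_0 = chol(my_P_0, 'lower');   % satisfies S_0 * S_0.' == my_P_0
```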

### Include State & Uncertainty in Init?

If the initialization should set the initial state estimate and uncertainty representation, select this box. Otherwise, the user will have to set the initial state and covariance (or UDU, information, or square-root information) matrix manually before invoking the filter.

### Example Measurement Vector

The engine will need to make some decisions involving the size of the measurement vector. Please provide an example measurement vector here; only its dimensions will be used. This can be code (e.g., 2) or a variable in the workspace.

### Example Input Vector

The engine will need to make some decisions involving the size of the input vector. Please provide an example input vector here; only its dimensions will be used. This can be code (e.g., 2) or a variable in the workspace.

### Additional Arguments

The generated code will generally need to call some of the user's functions, such as the propagation function. These functions may need more inputs than the engine knows to provide. Add a list of variable names here to include them; in the generated code, these will be inputs to any function that the filter calls.

This box is just for the names of these variables and should not include values. For instance, my_var_1, my_var_2 would be a valid input. In the generated code, these will show up with a prefix of user_ (to make sure their names don't interfere with internal variable names), such as user_my_var_1, user_my_var_2. This will result in the filter calling functions, such as the propagation function, as:

```matlab
x_k = f(t_km1, t_k, x_km1, u_km1, user_my_var_1, user_my_var_2);
```

Using anything other than variable names (such as values) is invalid. For instance, my_var_1, [1 2 3] would be invalid.
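On the user's side, functions called by the filter simply accept these extra inputs after the standard arguments. A sketch of a propagation function, with made-up dynamics, might be:

```matlab
function x_k = f(t_km1, t_k, x_km1, u_km1, user_my_var_1, user_my_var_2)
    % Propagate the state from t_km1 to t_k; the user_* inputs carry
    % whatever extra parameters the filter was told to pass along.
    dt  = t_k - t_km1;
    x_k = x_km1 + dt * (user_my_var_1 * x_km1 + user_my_var_2 * u_km1);
end
```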

### Filter Type

This is where filter selection begins. Choose from Linear and Extended Filters, Unscented Filters, or Particle Filters. The selection will change the options displayed in the center pane.

#### Linear & Extended Filters

This architecture includes many of the “named” filters, such as linear Kalman filters, extended Kalman filters, information filters, UD filters, Schmidt-Kalman filters, etc., and the various additional options for these. All forms represent linear filters or filters linearized about the current state estimate. They are well understood, relatively fast, and have a long legacy and so have become the workhorse filters for recursive state estimation.

Their chief assumptions are that the propagation and observation functions are modeled well and that the uncertainty associated with those processes is zero-mean and Gaussian. Further, it is assumed that the propagation and observation functions are differentiable and that it's reasonable to calculate their various Jacobians.

They work by propagating a state estimate forward in time and using a linearization of the propagation function to propagate the error covariance. They further use the propagated estimate to predict the observation, and they use a linearization of the observation function to determine the error covariance associated with the measurement. The covariance matrices are used to determine how to weigh between the propagated state and the state indicated by the measurement.
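The steps above can be sketched as a single extended Kalman filter iteration. This illustrates the general pattern, not the engine's generated code; f, h, the Jacobian functions, Q, and R are assumed to be supplied by the user:

```matlab
% Predict: propagate the state and use the Jacobian F of f to
% propagate the error covariance.
x_p = f(t_km1, t_k, x_km1, u_km1);
F   = F_jacobian(t_km1, t_k, x_km1, u_km1);
P_p = F * P_km1 * F.' + Q;               % Q: process noise covariance

% Update: predict the observation and use the Jacobian H of h to form
% the covariance terms that weigh propagation against measurement.
z_p = h(t_k, x_p);
H   = H_jacobian(t_k, x_p);
S   = H * P_p * H.' + R;                 % R: measurement noise covariance
K   = P_p * H.' / S;                     % Kalman gain
x_k = x_p + K * (z_k - z_p);             % corrected state estimate
P_k = (eye(numel(x_k)) - K * H) * P_p;   % corrected covariance
```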

#### Unscented Filters

Unscented filters also assume that the propagation and observation functions are reasonably well modeled, but they do not require that their errors are zero-mean. Further, they do not require Jacobians. Instead of linearizing, they propagate the covariance by propagating several points “near” to the estimate (called sigma points) and forming the predicted observation of each of these. These collections of points ultimately represent the covariance information of the unscented transform. Just like with linear and extended filters, the new covariance information is used to weigh between the mean of the propagated states and the state indicated by the measurement.
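As a sketch of the idea, the standard unscented transform draws 2n+1 sigma points from the current estimate and covariance and propagates each one. The scaling and weights vary with the chosen tuning; this uses one common convention and assumes the user's propagation function f:

```matlab
n      = length(x_km1);
lambda = 3 - n;                            % one common tuning heuristic
S      = chol((n + lambda) * P_km1, 'lower');
X      = [x_km1, x_km1 + S, x_km1 - S];    % n-by-(2n+1) sigma points

X_p = zeros(size(X));
for i = 1:size(X, 2)
    X_p(:, i) = f(t_km1, t_k, X(:, i), u_km1);   % propagate each point
end
% The propagated mean and covariance are then weighted sums over X_p.
```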

Unscented Kalman filters are usually somewhat slower than their linear and extended counterparts (perhaps requiring 150% of the runtime), though they are on the same order ($O(n^3)$). Their primary advantages are that they tend to be slightly more accurate and are much easier to set up, since one need not analytically determine the Jacobians for the problem. Notably, they do not assume that the propagated prior estimate will be the mean of the propagated uncertainty, and so they avoid the divergence problems that may be seen with extended filters.

#### Particle Filters

Particle filters are far broader than either of the other filter architectures, making very few assumptions. However, they are also very slow. Where an unscented Kalman filter might require 21 sigma points, a particle filter might require 1,000 to 10,000 particles.

A particle filter does not assume that uncertainty is Gaussian or even unimodal. It can work on problems with virtually any error characteristics. It does so by representing a large number of different potential states, called particles. When a new measurement comes in, each particle is propagated forward to the current time, and each is then used to predict the measurement. These predictions are compared with what was actually observed, and particles closer to the measurement are assigned higher likelihood. Over time, a new set of particles is drawn such that new particles are more likely to be near particles from the previous set that had high likelihood. This continues, resulting in a clustering of particles around the most likely points. If all goes well, the particles will cluster tightly in only one area. The estimate from a particle filter is not usually a mean of all of the particles; instead, the single most likely particle is often used.
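The cycle described above can be sketched as one step of a simple bootstrap particle filter. This is illustrative only; X is an n-by-N matrix of particles, and f, h, the process noise draw, and the likelihood function are placeholders for the user's models:

```matlab
w = zeros(1, N);
for i = 1:N
    % Propagate each particle and predict its measurement.
    X(:, i) = f(t_km1, t_k, X(:, i), u_km1) + draw_process_noise();
    z_p     = h(t_k, X(:, i));
    w(i)    = likelihood(z_k - z_p);   % closer to the measurement: higher
end
w = w / sum(w);                        % normalize the weights

% The estimate is often the single most likely particle.
[~, best] = max(w);
x_k = X(:, best);

% Resample: draw a new set, favoring high-likelihood particles.
edges      = [0, cumsum(w)];
edges(end) = 1;                        % guard against round-off
X          = X(:, discretize(rand(1, N), edges));
```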

Obviously, running all of these particles takes a long time. The benefit is that a particle filter can work in virtually any problem with a decent model. For this reason, the particle filter architecture in the engine has not been tailored for runtime in the same way as the other filter architectures; rather, it was tailored to be very general and readable. Simply: particle filters are not often used where runtime is a major concern (at least, not as much so as for the other filters).

Interestingly, because particle filters represent the estimates with discrete points, the error in the discrete points can be larger than the error of a very well-tailored unscented filter. There are many ways one might combine techniques.

For those familiar with optimization theory, a particle filter is a type of genetic algorithm, where fitness at each generation is the individual's nearness to the measurement, and where reproduction is asexual with mutation.

### Convert Filter To...

Use this button to convert between linear and extended, unscented, and particle filter architectures. Since functionality between these architectures is not always 1:1, any ambiguity or change in interfaces will be presented before the conversion takes place.

Any option in the new filter that cannot be determined from the old filter will be left at its default.

You can always switch back to your old filter architecture by re-selecting the appropriate type from the Filter Type drop-down.