*kf
by An Uncommon Lab

Workflow

People spend a lot of time talking about filters as algorithms, but designing a working filter requires far more than picking the right algorithm and a good coder. It starts with framing the problem to be solved and continues through integration and testing. This author has seen “delivered flight code” that was impossible to integrate into any realistic system and “working, trusted filters” that were actually doing essentially no filtering in operation. A workflow-centric design process, on the other hand, helps create the right environment for developing the filter, makes implementation proceed naturally, keeps integration easy, and demonstrates that the filter is working (or reveals when it isn't). In short, a good workflow makes life better. Let's talk about that process.

Determine the environment for the filter.

Where is the filter going to run? On a computer driving around in a car? On a desktop computer crunching incoming data? We need to design a filter with a good understanding of these constraints and the quality of the data we'll have. For instance, you wouldn't use a particle filter on a tiny embedded processor with a fairly linear problem and fast sensors, but you might use one on a desktop computer that's monitoring some complex process.

What data do you actually need? If adding a sensor makes the estimation significantly more reliable, it may make sense to add it. Conversely, if some sensor contributes virtually nothing, it may be best to remove it altogether, making the filter faster and reducing the mechanical, electrical, and software complexity.

Is the data clean? State estimators make assumptions about the errors in the data, but they generally make no provision for clearly-wrong data, such as a floating-point number that's been corrupted from 0.023 to 1.234e208. Cleaning the data first gives the filter a better foundation to work from.
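
As a minimal sketch of such a gate (the bounds, names, and values here are invented for illustration), one might drop non-finite or wildly out-of-range values before they ever reach the filter:

    import math

    # A hypothetical sanity gate: reject measurements that are
    # non-finite or outside physically plausible bounds. The bounds
    # here are made up for illustration.
    def is_plausible(z, lower=-100.0, upper=100.0):
        return math.isfinite(z) and lower <= z <= upper

    raw = [0.023, 1.234e208, float('nan'), -3.1]
    clean = [z for z in raw if is_plausible(z)]   # -> [0.023, -3.1]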

Determine the dynamics and measurement models.

Does it make sense to use polar or Cartesian coordinates? Should the units be meters or nanometers? Do any parts of the state have constraints, and can we usefully map the constrained variables (e.g., a unit vector) to unconstrained variables (e.g., two angles)? Is it convenient to describe the process and sensor noise for this representation? Does it linearize well? Are there discontinuities, such as an angle wrapping from \(-\pi\) to \(+\pi\)?
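
For the wrapping case in particular, a common fix is to wrap angular residuals back into \([-\pi, \pi)\); a minimal sketch in Python (the function name is ours):

    import math

    # Wrap an angular residual into [-pi, pi) so that a measurement of
    # +pi and an estimate of -pi differ by ~0 rather than ~2*pi.
    def wrap_angle(a):
        return (a + math.pi) % (2.0 * math.pi) - math.pi

    residual = wrap_angle(3.1 - (-3.1))   # ~ -0.083, not 6.2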

Choosing the right dynamics and measurement models can make filters run more quickly and with better numerical stability.

For example, representing the error in an estimate, rather than the full state, can make the dynamics of the problem fully linear, allowing a linear Kalman filter; filtering on errors is often the best option.
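
As a sketch of the standard first-order argument (the notation here is ours, not *kf's): if the truth satisfies \(\dot{x} = f(x)\) and we write \(x = \hat{x} + \delta x\) for a small error \(\delta x\), then

\[ \delta \dot{x} \approx F \, \delta x, \qquad F = \left. \frac{\partial f}{\partial x} \right|_{x = \hat{x}}, \]

so the error dynamics are linear (if time-varying) in \(\delta x\) even when \(f\) itself is not linear.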

As another example, quaternions (implemented as four-element vectors) are often used to describe orientation, but they perform poorly in filters: the unit-norm constraint means a quaternion carries only three independent pieces of information. The associated 4-by-4 covariance matrix can therefore develop a zero eigenvalue (or, after roundoff, a slightly negative one) along the constraint direction, which can wreak havoc on the estimator. Instead, three-element “errors” in the orientation are often used as states, as these remain stable. Carrying three states instead of four also makes the filter run a little faster.
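
A minimal sketch of the multiplicative idea, with conventions ([x, y, z, w] ordering, error quaternion applied on the left) chosen arbitrarily for illustration: the filter carries a three-element rotation error, and the full quaternion is recovered by composing a small-angle error quaternion with the reference.

    import numpy as np

    # Hamilton-style quaternion product for [x, y, z, w] ordering.
    def quat_multiply(q, r):
        x1, y1, z1, w1 = q
        x2, y2, z2, w2 = r
        return np.array([
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
        ])

    # Apply a small three-element rotation error to a reference
    # quaternion via the small-angle error quaternion [dtheta/2, 1],
    # then renormalize to stay on the unit sphere.
    def apply_error(q_ref, dtheta):
        dq = np.append(0.5 * np.asarray(dtheta), 1.0)
        q = quat_multiply(dq, q_ref)
        return q / np.linalg.norm(q)

    q = apply_error(np.array([0.0, 0.0, 0.0, 1.0]), [0.01, -0.02, 0.005])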

Select the filter architecture and options.

Here is where *kf comes in for the first time. Can we afford to use a particle filter? Can we use an unscented Kalman filter, or must we expend the extra effort to use an extended Kalman filter, which runs a little faster? We can start with easy, generic options, and wait until we've done some testing before getting really specific.

Test on a small problem.

It's best to generate small examples to make sure that the filter design is doing what it's supposed to, starting easy and growing more difficult. These can often be stand-alone examples, not part of some giant simulation, and *kf can often generate them automatically. As problems are uncovered, we can iterate on the design options, the architecture, perhaps the problem representation, and even the selection of sensors and data sources.
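
For a flavor of what such a small example looks like, here is a hand-written sketch (not *kf output): a scalar random walk, a linear Kalman filter, and a check that the filter's errors are consistent with its own covariance.

    import numpy as np

    rng = np.random.default_rng(1)
    q, r = 0.01, 0.1          # process and measurement noise variances
    x_true, x_hat, P = 0.0, 0.0, 1.0
    errs, sigmas = [], []
    for _ in range(200):
        x_true += rng.normal(0.0, np.sqrt(q))       # truth propagates
        z = x_true + rng.normal(0.0, np.sqrt(r))    # noisy measurement
        P += q                                      # predict
        K = P / (P + r)                             # Kalman gain
        x_hat += K * (z - x_hat)                    # update
        P *= (1.0 - K)
        errs.append(x_hat - x_true)
        sigmas.append(np.sqrt(P))

    # Roughly 95% of errors should fall within the 2-sigma bounds.
    frac = np.mean(np.abs(errs) <= 2.0 * np.array(sigmas))
    print(f"fraction within 2 sigma: {frac:.2f}")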

Testing begins the feedback cycle that zeroes in on the right design, so getting to testing quickly should take priority over designing every last detail first.

Test the implementation.

Before going into any further detail on the design, it's often helpful to make sure things are headed in the right direction. For instance, can the filter run quickly enough for the target application? We can use *kf to generate different types of code and maintain a focus on the final implementation, so it's feasible to begin testing for the target environment at any point in the design. It may be that the filter takes too long, and we'll need to go back and select different options or even a different architecture. It may be that the filter is faster than expected, making a more expensive but more accurate filter possible. Further, when working on a team, this gives others (say, the embedded software group) an early version of the filter to work with while they're still iterating on their own designs. The earlier integration starts, the better the result.
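
As an illustration of the timing question, a rough harness might look like the following; `filter_step` and the 1 ms budget are stand-ins, and timing on a desktop is only a proxy for the real target hardware.

    import time

    # Run one filter step many times and compare the average cost
    # against the real-time budget.
    def time_filter(filter_step, n=10000, budget_s=1e-3):
        t0 = time.perf_counter()
        for _ in range(n):
            filter_step()
        per_step = (time.perf_counter() - t0) / n
        print(f"{per_step*1e6:.1f} us/step vs. budget {budget_s*1e6:.0f} us")
        return per_step <= budget_s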

Test in detail.

To determine their performance, most filters are exercised in a realistic simulation of some sort. These simulations might include special distributions for the noise terms, different sensor update rates, erroneous sensor readings (such as occasionally corrupted data), and differences between the true dynamics and the model used by the filter. Of course, only an intelligent user can create these simulations based on her specific problem. Where *kf can help is by providing numerous avenues for integrating the implementation. It's good practice to compare results from this “big” simulation against those predicted by the “small” examples above. In fact, the small examples could be the starting point for creating the more realistic simulation.
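
For instance, injecting occasionally corrupted readings into a simulation might look like this sketch (the 1% rate and the corrupted value are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(2)

    # Occasionally replace a good reading with a clearly-wrong value,
    # mimicking the kind of corruption discussed earlier.
    def corrupt(z, p=0.01):
        return 1.234e208 if rng.random() < p else z

    measurements = [corrupt(z) for z in rng.normal(0.0, 0.1, size=1000)]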

Deploy.

We've already begun implementation and integration testing, so final deployment is trivial; it's simply the most up-to-date version. The important part of this step, then, is to examine the real-world performance against the simulations. How well did we model the environment? Are we finding any issues that were not uncovered in simulation? How can we feed this information back into our modeling assumptions? This may involve simple regression fits or more complicated system identification techniques. When we can close the loop between what was truly observed and what assumptions we made in the very beginning, then we'll have a solid understanding of the problem and, very likely, an excellent state estimator.
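
One common consistency check, sketched here with hypothetical logged values, is to compare the observed innovations against the covariance the filter predicted for them; if the modeling assumptions hold, the mean normalized innovation squared should be near 1.

    import numpy as np

    # Hypothetical innovations pulled from operational logs, and the
    # scalar innovation variance S the filter predicted for them.
    innovations = np.array([0.11, -0.08, 0.02, -0.15, 0.09])
    S = 0.01

    nis = innovations**2 / S
    print(f"mean normalized innovation squared: {nis.mean():.2f} (expect ~1)")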

Next Steps

Ready to dig into the *kf engine on your own problem? Here's the in-depth doc.

Table of Contents

  1. Environment
  2. Models
  3. Architecture
  4. Prototyping
  5. Integration
  6. Testing
  7. Deployment
  8. Next Steps