Yes, there are many Kalman filter implementations in Bayes++. In
Bayes++, the Kalman Filter and the Extended Kalman Filter (EKF) are
implemented by the *Covariance_filter* Scheme.

Pretty quick! It depends on the filter Scheme used. The *UD_filter*
is the smallest and fastest Scheme. The best way to speed things up is to
work on optimized use of uBLAS and to optimize uBLAS itself for your
compiler.

A 'Scheme' is the term used in Bayes++ to define a particular numerical implementation of a filter. Each Scheme is based on one of a few statistical representations of state. Different Schemes work on these statistics using different numerical techniques. The aim of Bayes++ is to provide common interfaces to Schemes so you can pick and choose which to use.

For a simple test this may be true. If you have ever tried to deal with the wide variety of numerical failures and normalizations required by discontinuous models, you will realize that there is more to implementing a Kalman filter than a handful of linear algebra equations!

Many DIY Kalman filter implementations fail because they do not maintain the symmetry of matrices. Even if this problem is corrected, they usually use numerically inaccurate algorithms, and will silently continue to operate even when the results no longer make sense because the matrices are ill conditioned! All these hard problems have been solved for you by Bayes++.

However, Bayes++'s most powerful feature is **not** that it just
does things correctly! It provides a consistent methodology to apply
multiple Bayesian filtering techniques. Once you have codified the
models that represent a problem, you can solve your problem with many
different Bayesian filtering techniques. These may be simple linear
filters such as the *Information_filter* scheme, or even a
particle filter such as the *SIR_filter* scheme.

Predict models represent the noise with its variance `q` and noise 'coupling' `G`. These together represent the process (predict) noise. In this case the process model is `x(k+1) = f(x(k)) + G.q(k)`, where `q(k)` is Gaussian white noise with variance `q`.

- This leads to a Kalman filter covariance update for the linear case:
`X(k+1) = F.X(k).F' + G.q.G'`

- This is equivalent to
`X(k+1) = F.X(k).F' + Q`

where `Q = G.q.G'`

There are a couple of reasons for expressing the process noise in this way. a) For factorised filters (such as the UD_scheme) it is in the perfect form. b) The same noise is often additive to more than one element of the state. In this case the size of `q` is less than that of `x`, and `G` provides a physically easy to interpret description of how the elements of `q` affect `x`.

The introductory and conceptual information can be found in the
Bayesian Filtering Classes document.
The documentation generated by Doxygen
provides a complete reference of all the class and member names and
their relationships. For information on a particular Scheme and how it
works it is best to look at its individual header file. The filter
class header **BayesFlt.hpp** also provides information on commonly
used and inherited class members such as state variables.

High on my priority list is to make this component information visible in the Doxygen documentation.

`cout << my_filter.x << my_filter.X << endl`

What does this mean? *my_filter* is a variable of type *Unscented_scheme*. This is one of the many filter Schemes.
In the class hierarchy *Unscented_scheme* inherits from the filter class *State_filter*.
This class defines the member variables `x` and `X`. The former is a vector and stores the estimated state.
The latter is a matrix and stores the estimated state covariance. To understand what these variables mean it is worth spending some time with a Kalman filtering text book or web site.

Yes! Bayes++ was developed to provide the maximum functionality in C++. A good C++ text book will help you understand how Bayes++ works. There is no need to learn C programming first. Learning C is not a good introduction to the modern C++ programming techniques used in Bayes++. I would recommend Deitel and Deitel, "C++: How to Program", Second Edition, Prentice Hall, ISBN 0-13-528910-6. It is an excellent beginner's book and includes many useful tips and a thorough treatment of the language.

Although many things have been added to Bayes++ over the last two years, they have only added to the variety of implementations. Bayes++'s interface has now reached a very mature stage, with little or no change required to add new Schemes. Be aware, however, that the matrix support implementation (anything in namespace Bayesian_filter_matrix) may change to accommodate matrix library changes.

The implementations of filtering Schemes included in the web release have all been tested with a standard range-angle observation problem. I also use the filtering Schemes for my own work, and so do others at the Australian Centre for Field Robotics and all over the world.

Bayes rule is usually defined in terms of Probability Density Functions. However, PDFs never appear in Bayes++; they are always represented by their statistics. This is for good reason: there is very little that can be done algorithmically with such a function. However, the sufficient statistics, given the assumptions of a filter, can be easily manipulated to implement Bayes rule. This is essentially what Kalman developed for linear systems.

Each filter scheme is derived from one or more virtual base
classes that represent the statistics used. For example the
*Kalman_state_filter* and *Sample_filter* base classes.

Bayes++ uses the **uBLAS**
library for all its matrix and vector containers and linear algebra
functions. **uBLAS**
is part of the larger **Boost**
portable C++ source libraries. uBLAS is an excellent basic linear
algebra library. The interface and syntax are easy to use. It
provides a wide variety of matrix and vector containers and a
complete set of Basic Linear Algebra operations. The implementation
and structure can incorporate many future enhancements and efficiency
improvements. The more I use uBLAS the more I like it! See also my
note on Effective uBLAS on the Boost Wiki.

Credit for uBLAS goes to Joerg Walter and Mathias Koch. Many thanks!

Older releases of Bayes++ support both uBLAS and MTL, the Matrix Template Library. Future releases of MTL may also be of interest to Bayes++; however, at present nothing is being publicly released, so I will await the outcome.

In principle it is possible to use a different matrix library when
Bayes++ is built. This just requires a new version of **matSupSub.hpp**
to be found before the one supplied in Bayes++ itself. However
Bayes++ makes extensive use of uBLAS syntax, so a change is a
significant task.

I get link errors, 'sgetrs_', 'dgetrf_', etc are missing, why?

Normally Bayes++ does not need LAPACK at all. These functions are part of the LAPACK linear algebra library.

The LAPACK functions for QR factorization are only used by the
*Information_root_scheme*. So unless you use this scheme or the
**uLAPACK.hpp** interface functions directly you do not need
LAPACK and should not have any link problems.

If you do wish to use the *Information_root_scheme*
then you will need to link with LAPACK. There are several ways
to do this depending on your requirements and system. If you use a
Linux distribution it probably has LAPACK available as a package.
Then all you need to do is install the package and add **-llapack
-lg2c** to your link options.

If you use Windows then you will have to do a lot more yourself.
The whole LAPACK library is available from **www.netlib.org** in
source form. Look for CLAPACK which is the C translation of the
Fortran original. It is not necessary to compile the whole library;
it is possible to use individual functions, which can be downloaded as
separate files.

No. The Scheme and Model sizes are run-time variables specified when the class is constructed. This makes sense, as we want to be able to create models and filters of varying sizes. Of course this is essential for applications like SLAM.

If size were a template parameter (and therefore fixed at compile time) then otherwise identical classes of different size would have different types. This would make them hard to store in containers and would defeat some of the polymorphic properties which allow algorithms to be easily chosen based on combined Scheme and Model types.

Yes and no. For each different size the compiler must produce code for that instance. For small sizes (1, 2, 3 and maybe 4) this makes sense. For more general sizes the number of possible different code versions soon becomes unmanageable. This results in incredible code bloat and, indirectly, slower code.

There should be a method by which the classes allow you to choose the matrix types and their storage method. For example storing fixed size matrices with uBLAS bounded_array could make things more efficient.

At the moment the only way
to do this is to provide an alternative to the **matSupSub.hpp** header. This is
ugly and only allows you to choose a matrix implementation for the whole
application. For example, I can compile and test Bayes++ with sparse matrices.

This could be done by templatising all the classes! I decided not to do this for three good reasons:

- Compiling is slow and debugging is cryptic. I have left the templatising boundary at the matrix library interface. Maybe with some future C++ compilation system this will change!
- Scheme and Model class types would depend on these implementation details. Otherwise identical classes would have different types, just as mentioned above for sizes fixed with template parameters.
- Many algorithms require additional matrices for numerical working. The *best* implementation of these is not directly related to the matrix types in the interface. These implementation details would have to be exposed so their type can be specified.