Sensor Fusion Algorithms For Autonomous Driving: Part 1 — The Kalman Filter and Extended Kalman Filter



Tracking of stationary and moving objects is a critical function of autonomous driving technologies. Signals from several sensors, including camera, radar and lidar (Light Detection and Ranging, based on pulsed laser light) sensors, are combined to estimate the position, velocity, trajectory and class of objects, i.e. other vehicles and pedestrians.

One may ask: why do we need several sensors? Because each sensor provides a different type of information about the tracked object's position, with different accuracies, especially in different weather conditions. For example, a lidar sensor can provide good positional resolution but suffers in poor weather. On the other hand, the spatial resolution of a radar sensor is relatively poor compared to a laser, but it retains its accuracy in poor weather. Also, unlike a lidar sensor, a radar can directly measure the velocity and bearing of the object. Lidar data is also more computationally intensive to process, because the sensor returns a large number of individual range points that the algorithm has to make sense of.

The technique of merging information from different sensors is called sensor fusion. For the reasons discussed above, sensor fusion algorithms have to deal with temporal, noisy input and generate a probabilistically sound estimate of kinematic state. This blog post covers one of the most common algorithms used in position and tracking estimation, the Kalman filter, and its variation, the Extended Kalman filter. In future articles we will cover other techniques such as Unscented Kalman filters and Particle filters.

1. The Basic Kalman Filter — using Lidar Data

The Kalman filter is over 50 years old, but it is still one of the most powerful sensor fusion algorithms for smoothing noisy input data and estimating state. It assumes that the location variables are Gaussian, i.e. they can be completely parametrized by a mean and a covariance: X ∼ N(μ, σ²).

As information from the sensor flows in, the Kalman filter uses a series of state prediction and measurement update steps to refine its belief about the state of the tracked object. These predict and update steps are described below.
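For intuition, the predict/update cycle can be sketched in one dimension, where the state and its covariance are plain scalars. The motion step, noise values and measurements below are purely illustrative, not taken from the original example:

```python
def predict_1d(mu, sigma2, motion, motion_var):
    # Prediction shifts the mean by the assumed motion and grows the uncertainty.
    return mu + motion, sigma2 + motion_var

def update_1d(mu, sigma2, z, meas_var):
    # Measurement update: a precision-weighted blend of prediction and measurement.
    k = sigma2 / (sigma2 + meas_var)          # Kalman gain in 1-D
    return mu + k * (z - mu), (1 - k) * sigma2

mu, sigma2 = 0.0, 1000.0                      # vague prior belief
for z in [5.0, 6.0, 7.0]:
    mu, sigma2 = predict_1d(mu, sigma2, 1.0, 2.0)   # assume the object moves +1 per step
    mu, sigma2 = update_1d(mu, sigma2, z, 4.0)
```

After the three measurements, the mean settles near the last measurement and the variance has shrunk from 1000 to a small value, which is exactly the smoothing behaviour described above.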

State Prediction:

We will use a simplified linear state space model to illustrate the workings of the filter. The linear state of the system at time t can be predicted from the state at time t−1 according to equation (1):

x′ = F·x + u        (1)
P′ = F·P·Fᵀ + Q

where x is the state vector, F is the state transition matrix, u is an external motion (control) vector, P is the state covariance and Q is the process (acceleration) noise covariance.

Measurement Update

The next part of the Kalman filter algorithm uses a real measurement z to correct the predicted state x′ by a scaling factor (called the Kalman gain) proportional to the error between the measurement and the predicted state:

y = z − H·x′        (innovation)
S = H·P′·Hᵀ + R     (innovation covariance)
K = P′·Hᵀ·S⁻¹       (Kalman gain)
x = x′ + K·y
P = (I − K·H)·P′

where H is the matrix that projects the state into the measurement space and R is the measurement noise covariance.

An Example

Enough theory! Let's try some code to illustrate the basic workings of the KF. Here we simulate an object whose state is modeled as a 4-dimensional vector x = [px, py, vx, vy]. In our case, the measurement sensor is a laser sensor that returns position data but no velocity information. To observe velocity we need to use a radar sensor; this is covered in the next section, where we discuss the Extended Kalman filter. We will start with a set of assumptions:

Model Assumptions
– the object moves with constant velocity (a linear motion model F),
– the acceleration noise Q is assumed to be zero,
– the laser measures position (px, py) only, with Gaussian noise R, and
– the initial state is known only with large uncertainty (a large initial P).

Pseudo code

The basic code for the Kalman filter steps is listed below.

"""prediction step"""
def predict(x, P):
    x = (F * x) + u 
    P = F * P * F.transpose() #Acceleration noise Q is assumed to be zero
    return x, P

"""measurement update step"""
def update(x, P,z):
    # measurement update
    Z = matrix([z])
    y = Z.transpose() - (H * x)
    S = H * P * H.transpose() + R
    K = P * H.transpose() * S.inverse()
    x = x + (K * y)
    P = (I - (K * H)) * P
    return x, P

The final step iterates through the measurements and applies the prediction and update steps of the filter as listed above.

plot_position_variance(x, P, edgecolor='r')      # plot initial position and covariance in red
for z in measurements:
    x, P = predict(x, P)
    x, P = update(x, P, z)
    plot_position_variance(x, P, edgecolor='b')  # plot updates in blue
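Putting the pieces together, a self-contained run might look like the sketch below. The constant-velocity transition matrix, the time step dt = 1, the noise values and the measurements are all illustrative assumptions for demonstration, not the values used in the original example:

```python
import numpy as np

dt = 1.0  # time step (assumed)

# State transition for a constant-velocity model over state [px, py, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],          # laser measures position only
              [0, 1, 0, 0]], dtype=float)
R = np.eye(2) * 0.1                  # measurement noise (illustrative)
I = np.eye(4)

x = np.zeros(4)                      # initial state: position and velocity unknown
P = np.eye(4) * 1000.0               # large initial uncertainty

# Made-up measurements of an object moving diagonally at constant velocity
measurements = [np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([3.0, 3.0])]

for z in measurements:
    # predict (process noise Q assumed zero, as in the pseudo code)
    x = F @ x
    P = F @ P @ F.T
    # measurement update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (I - K @ H) @ P

print(x)  # position converges to ~[3, 3], velocity to ~[1, 1]
```

Even starting from a completely uninformative state, three consistent measurements are enough for the filter to infer both the position and the (unmeasured) velocity of the object.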

The above figure illustrates each iteration of the Kalman filter for the px and py dimensions of the state vector, along with the positional covariance. The red circle is a visualisation of our initial process uncertainty. As we go through the incremental predictions and measurement updates, we develop a better estimate of the state, with less uncertainty (variance). As you can see, the final state vector x = [11.99, 2.05] is very close to the final measurement value, and the positional state variance is also minimal at 0.05.

2. The Extended Kalman filter — using Radar Data

Radar data poses a slightly more difficult challenge. It is returned in polar co-ordinates and consists of three components:
– ρ, or range (the radial distance from the origin),
– ϕ, or bearing (the angle between ρ and the x axis), and
– ρ̇, the range rate (radial velocity).

As there is no H matrix that will map the state vector into the radar measurement space, we need a new function h(x) that maps the state space into the measurement space for the measurement update step. This function is derived by mapping the cartesian state into polar co-ordinates and is defined as:
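In code, this standard mapping might be sketched as follows. The guard against ρ ≈ 0 is an added implementation safeguard, not part of the mathematical definition:

```python
import math

def h(px, py, vx, vy):
    """Map cartesian state [px, py, vx, vy] to radar space [rho, phi, rho_dot]."""
    rho = math.sqrt(px**2 + py**2)                   # range
    phi = math.atan2(py, px)                         # bearing, quadrant-aware
    rho_dot = (px * vx + py * vy) / max(rho, 1e-6)   # range rate (guard against rho ~ 0)
    return rho, phi, rho_dot
```

For example, an object at (3, 4) moving with velocity (1, 0) has range 5, bearing atan2(4, 3) and range rate 0.6, since only the component of velocity along the line of sight contributes to ρ̇.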

This mapping introduces a non-linearity, which would invalidate the Kalman filter's assumption that the process and measurement models are linear and Gaussian. The Extended Kalman filter approximates the non-linear model by a local linear model and then applies the Kalman filter to this approximation. The local linear approximation is obtained by computing a first order Taylor expansion around the current estimate. The matrix of first order partial derivatives is called the Jacobian matrix. The derivations of the Jacobians are a bit involved and we will not be covering them here. However, they are well documented in several internet resources on the topic, and if you want to use them straight off the cuff then you can refer to the implementation code in the github reference below:
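For illustration, the textbook Jacobian of the radar measurement function h(x) with respect to the state [px, py, vx, vy] can be sketched as below; this is the standard form, not code taken from the repository:

```python
import numpy as np

def radar_jacobian(px, py, vx, vy):
    """3x4 Jacobian of h(x) = [rho, phi, rho_dot] at the current state estimate."""
    rho2 = px**2 + py**2
    if rho2 < 1e-9:
        raise ValueError("Jacobian is undefined at the origin")
    rho = np.sqrt(rho2)
    rho3 = rho2 * rho
    return np.array([
        [px / rho,  py / rho,  0.0, 0.0],                    # d(rho)/dx
        [-py / rho2, px / rho2, 0.0, 0.0],                   # d(phi)/dx
        [py * (vx * py - vy * px) / rho3,                    # d(rho_dot)/dx
         px * (vy * px - vx * py) / rho3,
         px / rho, py / rho],
    ])
```

In the EKF measurement update, this matrix simply takes the place of H in the equations of section 1, re-evaluated at each new state estimate.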

Implementation reference:

You can find the code for a C++ implementation of the Kalman filter in the github repository:


So far we have covered some of the fundamental algorithms used in sensor fusion for object tracking. In the next part of this blog post we will look at the Unscented Kalman filter, which removes the need for a local linear approximation of the projection. We will also look at a more recent and increasingly popular technique, the Particle filter, which is based on Monte Carlo integration.

Ness Digital Engineering works with leading automotive and data technology companies in the areas of automotive safety product engineering, large scale fleet data solutions, advanced driver assistance solutions and location based services. Contact us to learn more.

About the Author

Wilbur Desouza
Wilbur Desouza is the Vice President - Solutioning at Ness and works from London.