optimization – Why does a Kalman filter work better in a simulation than in real life?

I am trying to use a classical
Kalman filter
for indoor localization. I am using the HTML5 Geolocation API to get
latitude, longitude, position accuracy and speed, and the
smartphone's accelerometer to get acceleration. The Kalman filter
combines this sensor data. I made a simulation in Python with
Gaussian error comparable to the actual measurements; the GPS
measurements have an accuracy of around 20 metres.
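
For context, this is roughly how the simulated measurements can be generated (a sketch; `noisy_gps_fix` and the sigma values are my own names and assumptions, chosen to match the 20 m position accuracy quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gps_fix(true_x, true_y, true_vx=0.0, true_vy=0.0,
                  sigma_pos=20.0, sigma_vel=0.2):
    """Simulate one GPS fix: the true state plus Gaussian measurement noise."""
    x = true_x + rng.normal(0.0, sigma_pos)    # position noise ~ N(0, 20 m)
    y = true_y + rng.normal(0.0, sigma_pos)
    vx = true_vx + rng.normal(0.0, sigma_vel)  # speed noise ~ N(0, 0.2 m/s)
    vy = true_vy + rng.normal(0.0, sigma_vel)
    return x, vx, y, vy
```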

In the simulation, the Kalman filter consistently achieved an accuracy of
1-2 metres over 10,000 iterations. But when I apply it in real life,
the estimated location just stays in the same place: there is almost no
movement when I move away from the original location.

My question is how can I make the Kalman filter respond to the movement?

It should at least follow the direction of motion, even if the absolute
location is not accurate initially.

Here is the Python snippet:

• `vx, vy` are the velocity components,
• `ax, ay` are the acceleration components, and
• `kx, ky` are the output position of the Kalman filter.
• `numpy.eye(n)` is just $$I_n$$, the identity matrix.
```python
import numpy

class KalmanFilterLinear:
    def __init__(self, _A, _B, _H, _x, _P, _Q, _R):
        self.A = _A  # State transition matrix.
        self.B = _B  # Control matrix.
        self.H = _H  # Observation matrix.
        self.current_state_estimate = _x  # Initial state estimate.
        self.current_prob_estimate = _P  # Initial covariance estimate.
        self.Q = _Q  # Estimated error in process.
        self.R = _R  # Estimated error in measurements.

    def GetCurrentState(self):
        return self.current_state_estimate

    def Step(self, control_vector, measurement_vector, accuracymatrix):
        # ---------------------------Prediction step-----------------------------
        predicted_state_estimate = (
            self.A * self.current_state_estimate + self.B * control_vector
        )
        predicted_prob_estimate = (
            self.A * self.current_prob_estimate
        ) * numpy.transpose(self.A) + self.Q
        # --------------------------Observation step-----------------------------
        innovation = measurement_vector - self.H * predicted_state_estimate
        innovation_covariance = (
            self.H * predicted_prob_estimate * numpy.transpose(self.H) + accuracymatrix
        )
        # -----------------------------Update step-------------------------------
        kalman_gain = (
            predicted_prob_estimate
            * numpy.transpose(self.H)
            * numpy.linalg.inv(innovation_covariance)
        )
        self.current_state_estimate = (
            predicted_state_estimate + kalman_gain * innovation
        )
        # We need the size of the matrix so we can make an identity matrix.
        size = self.current_prob_estimate.shape[0]
        # eye(n) = nxn identity matrix.
        self.current_prob_estimate = (
            numpy.eye(size) - kalman_gain * self.H
        ) * predicted_prob_estimate

# -------Creating the Kalman filter--------------------------
# State vector ordering is (x, vx, y, vy).
timeslice = 0.0043
state_transition = numpy.matrix(
    ((1, timeslice, 0, 0), (0, 1, 0, 0), (0, 0, 1, timeslice), (0, 0, 0, 1))
)
control_matrix = numpy.matrix(
    ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))
)
observation_matrix = numpy.eye(4)
initial_state = numpy.matrix(((20.0,), (0.0,), (20.0,), (0.0,)))  # 4x1 column vector
initial_probability = numpy.eye(4)
process_covariance = numpy.matrix(
    (
        (timeslice ** 4 / 4, timeslice ** 3 / 2, 0, 0),
        (timeslice ** 3 / 2, timeslice ** 2, 0, 0),
        (0, 0, timeslice ** 4 / 4, timeslice ** 3 / 2),
        (0, 0, timeslice ** 3 / 2, timeslice ** 2),
    )
)
measurement_covariance = numpy.eye(4) * numpy.array((20, 0.2, 20, 0.2))

kx = 0.0
ky = 0.0

kf = KalmanFilterLinear(
    state_transition,
    control_matrix,
    observation_matrix,
    initial_state,
    initial_probability,
    process_covariance,
    measurement_covariance,
)

# --------updating the kalman filter----------------------
# ax, ay, x, vx, y, vy and accuracy come from the sensors at runtime.

control_vector = numpy.matrix(
    (
        (0.5 * ax * timeslice * timeslice,),
        (ax * timeslice,),
        (0.5 * ay * timeslice * timeslice,),
        (ay * timeslice,),
    )
)
accuracymatrix = numpy.eye(4) * numpy.array((accuracy, accuracy, accuracy, accuracy))

kf.Step(control_vector, numpy.matrix(((x,), (vx,), (y,), (vy,))), accuracymatrix)

kx = kf.GetCurrentState()[0, 0]
ky = kf.GetCurrentState()[2, 0]
```
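
As a sanity check that the predict/update cycle does follow motion when Q, R and the state share the same units, here is a minimal self-contained 1-D constant-velocity run (all numbers are made up for illustration: 1 s steps, 20 m measurement noise, a true speed of 1.5 m/s, and a process-noise scale `q` I chose arbitrarily):

```python
import numpy as np

def run_cv_filter(n_steps=600, dt=1.0, sigma_r=20.0, v_true=1.5, q=0.05, seed=1):
    """Track a target moving at constant velocity from noisy position fixes."""
    rng = np.random.default_rng(seed)
    A = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition
    H = np.array([[1.0, 0.0]])                  # only position is measured
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])      # white-noise-acceleration model
    R = np.array([[sigma_r**2]])
    x = np.zeros((2, 1))                        # start at rest at the origin
    P = np.eye(2) * 100.0
    for k in range(1, n_steps + 1):
        z = np.array([[v_true * dt * k + rng.normal(0.0, sigma_r)]])
        x = A @ x                               # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z - H @ x)                 # update
        P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0]), float(x[1, 0])

pos, vel = run_cv_filter()
print(f"estimated position {pos:.1f} m, estimated velocity {vel:.2f} m/s")
```

In this toy run the estimate ends up near the true position (900 m) and speed, so when a filter of this form refuses to move, the usual suspects are a process covariance Q far too small relative to R, or the position and the noise terms being expressed in different units (e.g. degrees of latitude versus metres).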