State-Estimation-of-Robotics

Reading notes for State Estimation for Robotics

  • What is state estimation?
    1. The problem of reconstructing the internal state of a system and estimating the posterior, given a prior model of the system and a sequence of measurements
    2. In essence, the process of understanding what sensor measurements really tell us
  • Sensors
    1. Measure physical quantities to a certain accuracy
    2. The state estimation problem: find the best way to use the measurements the sensors provide

Part I Estimation Machinery

Chapter One Primer on Probability Theory (a refresher on the basics of probability theory)

2.1 Probability Density Functions (PDFs)

  • Simple Definitions
  1. Axiom of total probability

The great axiom of total probability, for a PDF p(x) defined over an interval [a, b]:

\begin{equation} \int_a^b p(x)dx =1 \end{equation}

For an interval [c, d], the probability Pr([c, d]) is given by

\begin{equation} Pr(c \leq x \leq d) = \int_c^d p(x)dx \end{equation}
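As a quick numerical sanity check (the uniform density and the interval endpoints below are made-up assumptions, not from the book), both equations can be verified on a grid:

```python
import numpy as np

# Verify the axiom of total probability and an interval probability
# for a uniform density p(x) = 1/(b - a) on [a, b]; numbers are made up.
a, b = 0.0, 2.0
x, dx = np.linspace(a, b, 10000, endpoint=False, retstep=True)
p = np.full_like(x, 1.0 / (b - a))

print(np.sum(p) * dx)          # integral of p over [a, b]: ~1.0

c, d = 0.5, 1.0
mask = (x >= c) & (x <= d)
print(np.sum(p[mask]) * dx)    # Pr(c <= x <= d) = (d - c)/(b - a) = 0.25
```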

  2. Conditional Probability Densities (named by James.G)

When we have a conditioning variable, we write x|y; p(x|y) represents the density of x given that the event y has occurred, which is called the conditional probability density. p(x|y) also satisfies the axiom of total probability:

\begin{equation} \int_a^b p(x|y)dx=1 \end{equation}

  3. Joint Probability Densities

When $x \in \mathbb{R}^n$, p(x) is called a joint probability density:

\begin{equation} p(x) , x=\{ x_1, x_2, \dots, x_n \} \end{equation}

which also satisfies the axiom of total probability,

\begin{equation} \int\cdots\int p(x_1, \cdots, x_n)dx_1\cdots dx_n =1 \end{equation}

Here, p(x) is referred to as a likelihood when x represents some state of the system.

  4. Bayes’ Rule and Inference

Since we have introduced the conditional probability density (it’s quite a mouthful, so let’s give it the short nickname pd), the famous formulation of Bayes’ rule is

\begin{equation} p(x|y) = \frac{p(y|x)p(x)}{p(y)} \end{equation}

where the formula comes from the two factorizations of the joint density:

\begin{equation} p(x, y) = p(x|y)p(y) = p(y|x)p(x) \end{equation}

To give meaning to this simple notation:

x = state, y = sensor reading, p(y|x) = sensor model, p(x|y) = state estimate

In Bayes’ rule, it is hard to compute p(y) directly, but it can be expanded as

\begin{equation} \begin{aligned} p(y) & = p(y)\int p(x|y)dx = \int p(x|y)p(y)dx \\ & = \int p(x, y)dx \\ & = \int p(y|x)p(x)dx \end{aligned} \end{equation}

This is because p(y) is the marginal distribution of y, obtained by integrating over every possible state. For example, estimating the probability that part of an image contains a cat means accounting for a very high-dimensional space of pixel values, which makes this integral expensive.
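As a minimal sketch of this inference step on a discretized 1-D state grid (the prior shape, the reading y, and the noise level σ below are illustrative assumptions, not from the book), the hard-to-compute p(y) reduces to a sum over the grid:

```python
import numpy as np

# Bayes' rule on a discretized 1-D state; all numbers are made up.
x = np.linspace(-5.0, 5.0, 1001)   # state grid
dx = x[1] - x[0]

prior = np.exp(-0.5 * x**2)        # p(x): standard-normal shape
prior /= prior.sum() * dx          # normalize so it integrates to 1

y_meas = 1.2                       # one sensor reading
sigma = 0.5                        # assumed sensor noise std-dev
likelihood = np.exp(-0.5 * ((y_meas - x) / sigma) ** 2)  # p(y|x), up to scale

evidence = (likelihood * prior).sum() * dx   # p(y) = ∫ p(y|x)p(x) dx
posterior = likelihood * prior / evidence    # p(x|y) by Bayes' rule

print(posterior.sum() * dx)        # ~1.0: the posterior is a valid pdf
```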

  5. Moments of PDFs

Moments are used to describe the properties of a distribution.

  • The zeroth probability moment is always 1 (according to the axiom of total probability).

  • The first probability moment is $\mu$, which is known as the mean:

\begin{equation} \mu = E[x] = \int x\, p(x)dx \end{equation}

  • The second (central) probability moment is known as the covariance matrix $\Sigma$:

\begin{equation} \Sigma = E[(x - \mu)(x - \mu)^T] \end{equation}

  • When we sample from p(x), we denote the draws as:

    \begin{equation} x_{meas} \leftarrow p(x) \end{equation}

    its mean

    \begin{equation} \mu_{meas} = \frac{1}{N}\sum x_{i, meas} \end{equation}

    its covariance

    \begin{equation} \Sigma_{meas} = \frac{1}{N-1}\sum(x_{i, meas}-\mu_{meas})(x_{i, meas}-\mu_{meas})^T \end{equation}

    where the divisor N − 1 (rather than N) is referred to as Bessel’s correction.
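A short sketch of these sample estimators (the data are made-up draws from a 2-D Gaussian; numpy assumed):

```python
import numpy as np

# Sample mean and covariance from N draws of a made-up 2-D Gaussian.
rng = np.random.default_rng(0)
N = 1000
x_meas = rng.multivariate_normal(mean=[1.0, -2.0],
                                 cov=[[2.0, 0.3],
                                      [0.3, 0.5]],
                                 size=N)              # shape (N, 2)

mu_meas = x_meas.mean(axis=0)                         # (1/N) Σ x_i
diff = x_meas - mu_meas
Sigma_meas = diff.T @ diff / (N - 1)                  # Bessel's correction

print(mu_meas)       # close to [1.0, -2.0]
print(Sigma_meas)    # close to the true covariance; matches np.cov(x_meas.T)
```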
  6. Statistical Independence and Uncorrelatedness
  • statistically independent:

    \begin{equation} p(x,y) = p(x)p(y) \end{equation}

  • uncorrelated:

    \begin{equation} E[xy^T] = E[x]E[y]^T \end{equation}

For jointly Gaussian random variables, statistical independence is equivalent to being uncorrelated.

In general, statistical independence implies uncorrelatedness, but the converse does not hold, as the counterexample sketched below shows.
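A numerical sketch of the classic counterexample (sample size and seed are arbitrary): x ~ N(0, 1) and y = x² are uncorrelated, since E[xy] = E[x³] = 0 = E[x]E[y], yet y is fully determined by x:

```python
import numpy as np

# Classic counterexample: x ~ N(0,1) and y = x^2 are uncorrelated
# but not independent (y is a deterministic function of x).
rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
y = x**2

# Sample estimate of E[xy] - E[x]E[y]; should be close to 0.
print(np.mean(x * y) - np.mean(x) * np.mean(y))

# Yet p(x, y) != p(x)p(y): knowing x pins down y exactly.
```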

  7. Normalized Product

The normalized product fuses two independent measurements of the same state:

\begin{equation} p(x) = \eta\, p_1(x)p_2(x) \end{equation}

where $\eta$ is a normalization constant whose role is to guarantee

\begin{equation} 1 = \int p(x)dx = \eta\int p_1(x)p_2(x)dx \end{equation}
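A sketch of this fusion for two 1-D Gaussian measurements (the means and variances are made-up). For Gaussians the normalized product is again Gaussian, with $1/\sigma^2 = 1/\sigma_1^2 + 1/\sigma_2^2$ and $\mu = \sigma^2(\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2)$:

```python
import numpy as np

# Fuse two independent Gaussian measurements of the same scalar state.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu)**2 / var) / np.sqrt(2.0 * np.pi * var)

p1 = gauss(x, mu=0.0, var=1.0)       # first measurement
p2 = gauss(x, mu=2.0, var=0.5)       # second measurement

unnorm = p1 * p2
eta = 1.0 / (unnorm.sum() * dx)      # η guarantees ∫ p(x) dx = 1
p = eta * unnorm

print(p.sum() * dx)                  # ~1.0
print((p * x).sum() * dx)            # fused mean ~ 4/3, as the formula predicts
```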

  8. Shannon Information and Mutual Information
  • Shannon information: quantifies the uncertainty of a single random variable (this is the entropy, which describes how spread out, or disordered, the distribution is)

    \begin{equation} H(x) = -E[\ln p(x)] = -\int p(x)\ln p(x)\, dx \end{equation}

  • Mutual information: quantifies the uncertainty shared by two random variables, i.e., how much knowing one reduces uncertainty about the other

    \begin{equation} I(x,y) = E\left[\ln\frac{p(x,y)}{p(x)p(y)}\right] = \int\int p(x,y)\ln\frac{p(x,y)}{p(x)p(y)}\, dx\, dy \end{equation}

  • When the two random variables are independent, we also find that

    \begin{equation} I(x,y) = H(x) + H(y) - H(x,y) = 0 \end{equation}
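A discrete-variable sketch of these quantities (the joint table is a made-up example; sums replace the integrals above). It also verifies the identity I(x,y) = H(x) + H(y) − H(x,y):

```python
import numpy as np

# Entropy and mutual information for a made-up 2x2 joint distribution.
p_xy = np.array([[0.30, 0.10],
                 [0.10, 0.50]])      # p(x, y), entries sum to 1
p_x = p_xy.sum(axis=1)               # marginal p(x)
p_y = p_xy.sum(axis=0)               # marginal p(y)

H = lambda p: -np.sum(p * np.log(p))                   # Shannon entropy (nats)
I = np.sum(p_xy * np.log(p_xy / np.outer(p_x, p_y)))   # mutual information

print(I)                             # > 0: x and y are dependent here
print(H(p_x) + H(p_y) - H(p_xy))     # equals I; it would be 0 if independent
```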

Chapter Two Gaussian Probability Density Functions

The multivariate Gaussian distribution