Probabilistic Robots
1. Uncertainty
After calibration the robot should, on average, return to the desired location, but individual runs will scatter due to uncontrollable factors. These errors are zero-mean and accumulate incrementally, so the spread of the distribution grows with distance travelled. This can be modelled as a *Gaussian distribution*.
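A small simulation sketches this: each step of a commanded journey picks up a zero-mean Gaussian error, and the scatter of the final position grows with distance (all noise values here are illustrative).

```python
import random
import statistics

def final_position(n_steps, step=1.0, noise_std=0.05):
    """Drive forward n_steps; each step picks up a zero-mean Gaussian error."""
    x = 0.0
    for _ in range(n_steps):
        x += step + random.gauss(0.0, noise_std)
    return x

# Repeat the same commanded journey many times and measure the scatter of
# the final position: it grows roughly with sqrt(distance travelled).
random.seed(0)
for n in (10, 100, 1000):
    errors = [final_position(n) - n for _ in range(2000)]
    print(f"{n:5d} steps: scatter = {statistics.stdev(errors):.3f}")
```

Because the per-step errors are independent and zero-mean, the variances add, so the standard deviation grows like the square root of the number of steps.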
In reality, every term in our motion and sensing models carries uncertainty.
Simple sensing/action procedures are locally effective but limited in complex real-world problems. Instead, we can build incrementally updated probabilistic models to estimate the position of our robot on the map.
Every action and sensor measurement is uncertain. When estimating a robot's state we combine these uncertain inputs, so the state estimate is also uncertain. Typically we take an uncertain measurement, then take an action to gather new information, and use both to update the estimate.
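This measure-act-update cycle can be sketched as a minimal 1D predict/update loop (a Kalman-style filter in one dimension; all noise variances and measurements below are illustrative):

```python
def predict(mean, var, motion, motion_var):
    # Acting (moving) shifts the estimate and adds noise: uncertainty grows.
    return mean + motion, var + motion_var

def update(mean, var, z, z_var):
    # Fusing a measurement pulls the estimate toward z and shrinks variance.
    k = var / (var + z_var)  # gain: how much to trust the measurement
    return mean + k * (z - mean), (1 - k) * var

# Start uncertain, then alternate: move one unit, measure position.
mean, var = 0.0, 1.0
for motion, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    mean, var = predict(mean, var, motion, motion_var=0.25)
    mean, var = update(mean, var, z, z_var=0.25)
print(mean, var)
```

Note the two halves pull in opposite directions: prediction inflates the variance, measurement deflates it, and the estimate settles near the true position.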
2. Probabilistic Inference
Combining prior knowledge with new measurements is generally modelled as a Bayesian network: a structured series of weighted combinations of old and new information.
Sensor fusion is the general process of combining multiple uncertain measurements to produce a better estimate.
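For two independent Gaussian estimates of the same quantity, a standard fusion rule weights each by its inverse variance; a minimal sketch (the sensor values are illustrative):

```python
def fuse(m1, var1, m2, var2):
    """Fuse two independent Gaussian estimates of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2  # inverse-variance weights
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# e.g. a coarse estimate (mean 10, variance 4) fused with a
# finer one (mean 12, variance 1):
mean, var = fuse(10.0, 4.0, 12.0, 1.0)
print(mean, var)  # mean lands nearer the more certain sensor
```

The fused variance is always smaller than either input variance, which is why fusing more sensors can only sharpen the estimate (assuming independent errors).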
2.1 Bayesian Probabilistic Inference
Bayesian probability is a measure of subjective belief: probabilities describe our state of knowledge. Bayes' rule relates the probabilities of discrete statements:

P(A | B) = P(B | A) P(A) / P(B)

Here, P(A) is the prior belief in A, P(B | A) is the likelihood of observing B if A holds, P(A | B) is the posterior belief in A after observing B, and P(B) normalizes the result.
We use Bayes' rule to incrementally digest new information from sensors about a robot's state.
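A small worked example of this incremental digestion, with a hypothetical binary state (a door being open) and a hypothetical sensor model (reports "open" with probability 0.8 when the door is open, 0.2 when it is closed):

```python
def bayes_update(prior, p_z_if_true, p_z_if_false):
    """Posterior P(state | z) from prior P(state) and likelihoods P(z | state)."""
    joint_true = p_z_if_true * prior
    joint_false = p_z_if_false * (1.0 - prior)
    return joint_true / (joint_true + joint_false)  # normalize

# Start agnostic, then digest three consecutive "open" readings.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_z_if_true=0.8, p_z_if_false=0.2)
    print(f"belief door is open: {belief:.3f}")
```

Each noisy reading alone is unreliable, but repeated application of Bayes' rule drives the belief toward certainty.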
2.2 Probability Distributions
Discrete probabilistic inference generalizes to large (or continuous) state spaces, where we use a continuous probability density function p(x) that integrates to 1.
A Gaussian distribution often represents uncertainty in measurements well:

p(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))
The prior (wide Gaussian) is updated with a likelihood (narrow Gaussian) to produce a posterior (narrower Gaussian).
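When prior and likelihood are both one-dimensional Gaussians (prior mean μ_prior, variance σ_prior²; measurement z with variance σ_z²), the posterior has a closed form, and its variance is always smaller than either input variance:

```
μ_post  = (σ_z² μ_prior + σ_prior² z) / (σ_prior² + σ_z²)
σ_post² = σ_prior² σ_z² / (σ_prior² + σ_z²)  ≤  min(σ_prior², σ_z²)
```

The posterior mean is a weighted average that sits between the prior mean and the measurement, closer to whichever is more certain.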
2.3 Particles
Here, a probability distribution is represented by a finite set of weighted samples ("particles") of the state.
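A minimal sketch of the idea, assuming a 1D position, a Gaussian sensor model, and illustrative numbers: weight each particle by how well it explains a measurement, then resample in proportion to weight so particles concentrate in likely regions.

```python
import math
import random

# A belief over a 1D position as a set of samples (illustrative values):
particles = [0.0, 0.5, 1.0, 1.5, 2.0]

# Weight each particle by how well it explains a measurement z = 1.0,
# under an assumed Gaussian sensor model with variance 0.1:
z, sensor_var = 1.0, 0.1
weights = [math.exp(-(p - z) ** 2 / (2 * sensor_var)) for p in particles]

# Resample: draw a new set with probability proportional to weight.
# Unlikely particles die off; likely ones are duplicated.
random.seed(0)
new_particles = random.choices(particles, weights=weights, k=len(particles))
print(new_particles)
```

Unlike a Gaussian, a particle set can represent multi-modal beliefs (e.g. "the robot is near one of several identical doorways"), which is why it is popular for localization.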