This is not an intuitive Decision Rule any more, but probability models for p(x|s) and p(s) can be learned from the environment. The prior p(s) is a discrete PDF, due to the discrete nature of the states³, and can be determined by simply counting the occurrences of s.

Modelling the state-conditional distribution p(x|s) is more complicated. The vector space of x, the numerical representation of the evidence, can take many shapes. It can be of categorical nature, leading to a discrete PDF, or, if it represents physical measurements, it will be a continuous one. The analytical choice of the form of the PDF, whether it is a multi-variate Gaussian, a mixture density or another complex distribution, is referred to as applying model assumptions. If the prior analysis has led to a p(x|s) that is modelled according to the true nature of the environment, the free parameters of the model need to be determined. Similar to learning the structure of the prior p(s), the parameters of p(x|s) can be learned from the environment. But due to the coupling of s and x, state-annotated evidence data is needed, i.e. training samples in which each feature vector x is labelled with the state s under which it was observed. Ignoring the problem of gathering this data, parameter estimation techniques like Maximum-Likelihood can then be applied to the set of training samples. If p(x|s) is modelled as a Gaussian, this amounts to estimating the mean and the variance.

Summarizing the results: the Bayes Decision Rule conducts a search for the state s with the maximum posterior probability p(s|x) for an observed feature vector x. The Bayes Decision Rule is therefore a function that takes some measured evidence x as input and outputs the most probable state s of the environment. Instead of directly evaluating the posterior probability p(s|x), the prior p(s) and the state-conditional p(x|s) are employed, as they can be learned from the environment. p(s) can easily be learned by counting. For p(x|s), suitable model assumptions must be chosen and the free parameters of the model need to be trained.

If these concepts are applied to the positioning problem with RSSI measurements, the following example model results:

1. A state s is an enumerable region of space, a location.
2. The feature vector x is the jointly received vector of RSSI values for different APs.
3. p(s) is the probability of being in a specific location. In a geographically restricted mobility model, p(s) would be zero for unreachable regions.
4. p(x|s) is the probability of receiving the measurements x at the location s. p(x|s) can be modelled as a multi-variate Gaussian, with a mean vector that represents the anticipated AP-specific RSSI values at the location s. Assuming equal noise over all APs, a signal variance of around 5 dBm will be chosen.
5. The AP-specific means of p(x|s) will be obtained from a radio propagation model.

³ It is possible to assume a continuous state space as well, but this has not been done for simplicity.
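To make this example model concrete, the following Python sketch (an illustration added here, not taken from the thesis) combines the pieces discussed above: the prior p(s) is learned by counting location labels in state-annotated training data, the per-location mean RSSI vectors of p(x|s) are obtained by Maximum-Likelihood estimation (the per-location sample mean), and the decision picks the location with the highest posterior. In the example model above the means would instead come from a radio propagation model (item 5); the function and parameter names (train, decide, sigma_db) are illustrative assumptions.

    import numpy as np

    def train(samples, num_locations, num_aps, sigma_db=5.0):
        # Learn p(s) by counting occurrences of each location and the Gaussian
        # means of p(x|s) by Maximum-Likelihood (the per-location sample mean).
        counts = np.zeros(num_locations)
        sums = np.zeros((num_locations, num_aps))
        for s, x in samples:
            counts[s] += 1
            sums[s] += np.asarray(x, dtype=float)
        prior = counts / counts.sum()                   # p(s) from relative frequencies
        means = sums / np.maximum(counts, 1)[:, None]   # ML estimate of the mean vectors
        return prior, means, sigma_db

    def decide(x, prior, means, sigma_db):
        # Bayes Decision Rule: argmax over s of p(s) * p(x|s), evaluated in the
        # log domain with an isotropic Gaussian p(x|s) (equal noise over all APs).
        x = np.asarray(x, dtype=float)
        log_likelihood = -np.sum((x - means) ** 2, axis=1) / (2.0 * sigma_db ** 2)
        log_posterior = np.log(prior + 1e-12) + log_likelihood
        return int(np.argmax(log_posterior))

    # Hypothetical usage with three locations and two APs:
    # samples = [(0, [-40, -70]), (0, [-42, -68]), (1, [-60, -55]), (2, [-75, -45])]
    # prior, means, sigma = train(samples, num_locations=3, num_aps=2)
    # decide([-41, -69], prior, means, sigma)  # -> 0

Working in the log domain avoids numerical underflow, and because the noise is assumed equal over all APs, the decision reduces to comparing log p(s) minus the scaled squared distance between the measured and the anticipated RSSI vectors.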