
STATE-BASED MOBILITY MODEL (SMM)

A mobility model, in the context of location management, captures and describes the daily movements of a user. Accordingly, various mobility models have been developed for mobile computing environments.

Mobility modeling in MOD is difficult because of the higher location resolution compared to PCS. Moreover, the concern in MOD is not a logical/symbolic location, such as a cell ID, but the actual physical/geographical location of a moving object obtained by a location-sensing device such as GPS. As noted above, it is necessary to consider complex movement containing both random and linear movement patterns.


Basic Definitions

The state-based mobility model (SMM) represents a complex mobility pattern as a set of simple movement components, using a finite-state Markov chain based on the classification scheme above.

Definition 1 A movement state si is a 3-tuple (vmin, vmax , φ), where vmin and vmax
are the minimum and maximum speeds of a moving object, respectively. φ is a function of
movement and is either probabilistic or non-probabilistic.

S is a finite set of movement states (called the state space).

Definition 2 The state-based mobility model (SMM) describes user mobility patterns using a finite-state Markov chain {staten}, where staten denotes the movement state at step n, staten ∈ S. The chain is described completely by its transition probabilities:
pij ≡ Pr{staten+1 = j | staten = i} for all i, j ∈ S. (1)


These probabilities can be grouped together into a transition matrix as follows: P ≡ (pij)i,j∈S. (2)

An important question is why such a general mobility model is not as popular as the restrictive models so abundant in the literature. The most important reason is that the generalized model assumes nothing from which to start the analysis.

In the SMM model, we also assume that a moving object tends to remain in the same state rather than switch states. This is generally called temporal locality.

The self-transition probability vector (STPV) collects all of the self-transition probabilities pii; it is a 1 × |S| matrix.
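As a quick illustration, with a hypothetical 3-state transition matrix (the numbers below are made up, not from the text), the STPV is just the diagonal of P:

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

# STPV: the self-transition probabilities p_ii, a 1 x |S| vector.
stpv = np.diag(P)
print(stpv)  # [0.7 0.5 0.6]
```

Temporal locality holds in this example: each p_ii is the largest entry in its row.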


Comments

  • jaguar1637 2373 days ago

    Determining the Transition Matrix

    One of the most important tasks in the SMM model is to determine the transition
    matrix P. The matrix can be determined using the user profile, a spatio-temporal
    data-mining process, or an ad hoc choice. The simplest way is to set
    pij = 1/|S| for all i, j ∈ S.

    A more reasonable solution is to use statistical techniques to infer the values of the transition probabilities empirically from past data.

    For example, suppose the optimal state for each time unit forms a Markov chain with state space S0, and that the optimal states observed over 36 time units were PPPLLLLRRLPLLLRRRPPPRLLRLLLLRPPRRRRP.
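As a sketch of that empirical inference (in Python here; the code later in the thread is R), counting consecutive pairs in the example sequence and normalizing each row gives the transition probabilities:

```python
from collections import Counter

# The 36 observed optimal states from the example above.
seq = "PPPLLLLRRLPLLLRRRPPPRLLRLLLLRPPRRRRP"

# N_ij: number of transitions from state i to state j.
counts = Counter(zip(seq, seq[1:]))

states = sorted(set(seq))  # ['L', 'P', 'R']
P = {}
for i in states:
    row_total = sum(counts[(i, j)] for j in states)
    # Empirical probability p_ij = N_ij / sum_j N_ij
    P[i] = {j: counts[(i, j)] / row_total for j in states}
    print(i, P[i])
```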

     

  • jaguar1637 2373 days ago

    Many tracking problems lie outside of the traditional situation of linear (or linearizable) measurements and dynamics addressed by the Kalman filter and its variants.

    This is especially true in applications where target measurements are highly ambiguous and visibility is affected by unpredictable phenomena such as intermittent interference and low signal-to-noise ratios. Bayesian tracking provides the general solution to this more general class of problems.

    Conceptually, Bayesian tracking is straightforward: given the target measurements, apply Bayes' rule to compute the probability density of the target location at any given time, all the while assuming a target motion model. Bayesian trackers are computationally expensive; there are two basic approaches to their implementation: sequential Monte Carlo methods such as particle filtering, and deterministic methods that compute the target density directly.

    This work presents a new approach to the direct method that is theoretically and computationally novel in several ways.

    First, recent results from the theory of adaptive moving meshes are modified and applied, an approach that is distinctly different from previously published direct methods that use fixed meshes to solve the Fokker-Planck equation.

    Second, a straight-line motion model based on a Markov jump process for the velocity is assumed. Straight-line motion punctuated by jumps in target velocity may be a more suitable assumption for some target dynamics than the traditional random walk assumed in the Kalman filter and many Bayesian trackers.

    The resulting linear partial differential equation that describes the target position density is relatively easy to solve numerically, especially compared to the Fokker-Planck equation that results from the random-walk motion assumption.

    The proposed Bayesian tracking algorithm is a promising alternative to competing methods. It is also shown that, like particle filters, the adaptive mesh approach necessarily suffers from the curse of dimensionality. Simulation results are shown using the example of bistatic radar.

  • jaguar1637 2373 days ago

    How to compute the transition matrix (based here on the 36 previous bars)

    The key in this way of thinking is how to compute the transition matrix. (I was looking for an answer for a long time, and finally I got it!)

    I take this example to help you =>

    • U = Up (Long)
    • D = Down (Short)
    • R = Ranging (Sleep or Low Spread)

    Counting the number of transitions Nij from state i to state j gives:

                           UP   Down   Ranging (Sleep)
        UP                  5      2        2
        Down                1      9        4
        Ranging (Sleep)     3      3        6

     

    Normalizing each row by its total gives Pij =

                           UP          Down        Ranging (Sleep)
        UP              5/(5+2+2)   2/(5+2+2)      2/(5+2+2)
        Down            1/(1+9+4)   9/(1+9+4)      4/(1+9+4)
        Ranging         3/(3+3+6)   3/(3+3+6)      6/(3+3+6)

     

     

                           UP        Down      Ranging (Sleep)
        UP              0.5556    0.2222      0.2222
        Down            0.0714    0.6429      0.2857
        Ranging         0.25      0.25        0.5

    At first glance, I can see:

    • Down of Down = 0.6429 (the highest)
    • Up of Up = 0.5556
    • R of R = 0.5

    Solving for the stationary distribution pi = pi · P gives:

    • pi(U) = 0.257
    • pi(D) = 0.4
    • pi(R) = 0.343

    => The result implies taking a Short position, since Down has the largest stationary probability.
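Those pi values can be checked with a short Python sketch (power iteration on the row-normalized counts 5,2,2 / 1,9,4 / 3,3,6; numpy assumed):

```python
import numpy as np

# Transition counts from the example; rows/cols ordered UP, Down, Ranging.
T = np.array([[5, 2, 2],
              [1, 9, 4],
              [3, 3, 6]], dtype=float)
T /= T.sum(axis=1, keepdims=True)  # normalize rows to probabilities

# Stationary distribution: pi = pi @ T with sum(pi) = 1.
# Power iteration converges quickly for this small, well-mixed chain.
pi = np.ones(3) / 3
for _ in range(200):
    pi = pi @ T

print(pi.round(3))  # ~ [0.257, 0.4, 0.343]
```

Down indeed comes out with the largest stationary probability (about 0.4), matching the short-position reading above.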

     

     

  • jaguar1637 2373 days ago

    I forgot to send the signals provided by the last 36 bars, regarding the previous explanation:

    UU DDDD RR DU DDD RRR UUU R DD R DDDD R UU RRRRR U

  • jaguar1637 2362 days ago

    So, the operation orders (Up, Down, and Range (nope)) can be put inside a matrix:

     

     

                           UP   Down   Ranging (Sleep)
        UP                  5      2        2
        Down                1      9        4
        Ranging (Sleep)     3      3        6


  • jaguar1637 2362 days ago

    Counting the number of transitions Nij from state i to state j gives that matrix.

    I think this is a transition matrix?

  • jaguar1637 2362 days ago

    In mathematics, a stochastic matrix is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix!

    http://en.wikipedia.org/wiki/Stochastic_matrix

    And better still: a stochastic matrix describes a Markov chain X_t over a finite state space S.

    This is connected to transition probabilities => http://en.wikipedia.org/wiki/Transition_probabilities

    Also, check this: http://en.wikipedia.org/wiki/State-transition_matrix

  • jaguar1637 2362 days ago

    For people interested in Markov chains and matrix calculations, there is this link:

    http://academic.uprm.edu/wrolke/esma6600/mark1.htm

  • jaguar1637 2362 days ago

    The MQ4/R code could be this one; the following R version will do:

    # Estimate the empirical transition matrix from an integer-coded
    # state sequence x (values 1..N).
    make.Markov_Chains = function(x)
    {
        N   = max(x)
        res = array(dim = c(N, N))
        for (i in 1:N)
        {
            # positions where the current state is i
            index = x[1:(length(x) - 1)] == i
            S     = sum(index)
            for (j in 1:N)
                # share of those positions followed by state j
                res[i, j] = sum(x[2:length(x)][index] == j) / S
        }
        return(res)
    }

  • jaguar1637 2362 days ago

    I checked this formula:

    http://latex.codecogs.com/gif.latex?P1v(i,j)=\frac{{\sum_{u=1}^{^{S_{u}-1}}\sum_{v=1}^{S_{v}-1}\delta%20({F_{h}(u,v)=i,{F_{h}(u,v+1)=j}})}}{\sum_{u=1}^{{S_{u}-1}}\sum_{v=1}^{S_{v}-1}\delta%20(F_{h}(u,v)=i)}

  • JohnLast 2360 days ago

    So you really want to build something based on a Markov transition matrix? I'm slowly beginning to understand it.

    Maybe it would be a good idea to try to make something extremely simple first.

  • jaguar1637 2359 days ago

    yes, this is exactly what I want; they used a transition matrix to fetch the next movement

  • jaguar1637 2359 days ago

    Better: this Markov transition matrix will be reused in NN EAs or with perceptrons

  • jaguar1637 2358 days ago

    What is a Markov chain?

    Suppose X1, X2, ..., Xn is a sequence of integer-valued independent random variables. Define
    Y1 = X1,
    Yn + c·Y(n-1) = Xn,

    where c is a positive integer. Then {Yn, n >= 1} is a Markov chain: Yn depends on the past only through Y(n-1).

    First, the raw data are a time series of categories corresponding to the rows (or columns) of the transition matrix. For each category, we fill in the corresponding row with the empirical distribution of its successor: Up, Down, Range (means nope).

  • jaguar1637 2356 days ago

    I just asked a mathematics teacher to provide me the code in C

  • JohnLast 2356 days ago

    Nice!