Markov Chains and Hidden Markov Models (lib601.markov)

Tools for working with Markov Chains and Hidden Markov Models

class lib601.markov.HMM(initial_distribution, transition_model, observation_model)

Class for specifying a hidden Markov model

Parameters:
  • initial_distribution – \(\Pr(S_0)\), the initial distribution over states, represented as a lib601.dist.DDist
  • transition_model – \(\Pr(S_{t+1}~|~S_t)\), represented as a procedure that takes an old state and returns a lib601.dist.DDist over new states.
  • observation_model – \(\Pr(O_t~|~S_t)\), represented as a procedure that takes a state and returns a lib601.dist.DDist over observations that can be made if the system is in that state.
Variables:
  • initial_distribution – The value passed in as initial_distribution
  • transition_model – The value passed in as transition_model
  • observation_model – The value passed in as observation_model
make_simulator()
Returns: An instance of HMMSimulator for simulating the behavior of this HMM
make_state_estimator()
Returns: An instance of StateEstimator for performing Bayesian state estimation using the models described by this HMM
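To make the three constructor arguments concrete, here is a sketch of how an HMM might be specified. Since lib601 may not be installed, a minimal DDist stand-in is defined inline (illustrative only; the real lib601.dist.DDist has more functionality), and the two-state weather model is a hypothetical example, not part of the library:

```python
import random

# Minimal stand-in for lib601.dist.DDist: a discrete distribution
# over hashable elements, stored as a dict element -> probability.
class DDist:
    def __init__(self, d):
        self.d = d
    def prob(self, elt):
        return self.d.get(elt, 0.0)     # Pr(elt); 0 if not in support
    def support(self):
        return list(self.d.keys())
    def draw(self):
        r = random.random()             # inverse-CDF sampling
        for elt, p in self.d.items():
            r -= p
            if r < 0:
                return elt
        return elt

# Hypothetical two-state weather model:
initial = DDist({'sunny': 0.6, 'rainy': 0.4})

def transition_model(s):                # Pr(S_{t+1} | S_t = s)
    if s == 'sunny':
        return DDist({'sunny': 0.8, 'rainy': 0.2})
    return DDist({'sunny': 0.3, 'rainy': 0.7})

def observation_model(s):               # Pr(O_t | S_t = s)
    if s == 'sunny':
        return DDist({'dry': 0.9, 'wet': 0.1})
    return DDist({'dry': 0.2, 'wet': 0.8})
```

With the real library these three values would be passed directly to lib601.markov.HMM(initial, transition_model, observation_model).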
class lib601.markov.HMMSimulator(hmm)

Class for simulating the behavior of some system whose behavior is described by a Hidden Markov Model.

initialize()

Randomly selects an element from the HMM's initial_distribution and stores the result in the instance variable state. Returns None.

make_observation()

Returns an observation randomly drawn from \(\Pr(O_t~|~S_t)\), where \(S_t\) is the internal state of the system (stored in the instance variable state).

transition()

Updates the instance variable state by randomly selecting an element from \(\Pr(S_{t+1}~|~S_t)\). Returns None.

state = None

Holds the “current” state of the system (will have value None until initialize() is called, and will be updated by transition()).
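The simulator's initialize / make_observation / transition cycle can be sketched as follows. This is a hedged illustration of the semantics described above, using plain dicts in place of lib601.dist.DDist and a hypothetical two-state model:

```python
import random

def draw(dist):
    """Sample an element from a dict mapping element -> probability."""
    r = random.random()
    for elt, p in dist.items():
        r -= p
        if r < 0:
            return elt
    return elt

# Hypothetical two-state HMM (dicts stand in for lib601.dist.DDist):
initial = {'sunny': 0.6, 'rainy': 0.4}
trans = {'sunny': {'sunny': 0.8, 'rainy': 0.2},
         'rainy': {'sunny': 0.3, 'rainy': 0.7}}
obs = {'sunny': {'dry': 0.9, 'wet': 0.1},
       'rainy': {'dry': 0.2, 'wet': 0.8}}

state = draw(initial)                       # initialize()
observations = []
for t in range(5):
    observations.append(draw(obs[state]))   # make_observation()
    state = draw(trans[state])              # transition()
```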

class lib601.markov.MarkovChain(initial_distribution, transition_model)

Class for describing a Markov chain and computing some useful properties of it.

Parameters:
  • initial_distribution – \(\Pr(S_0)\), the initial distribution over states, represented as an instance of lib601.dist.DDist
  • transition_model – \(\Pr(S_{t+1}~|~S_t)\), represented as a procedure that takes an old state and returns a lib601.dist.DDist over new states.
Variables:
  • initial_distribution – The value passed in as initial_distribution
  • transition_model – The value passed in as transition_model
make_simulator()
Returns: An instance of MarkovChainSimulator for simulating the behavior of this Markov chain
occupation_dist(T)

Returns a lib601.dist.DDist over states at time T, i.e. \(\Pr(S_T)\)

Parameters: T – time step
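The computation behind occupation_dist can be sketched by repeatedly pushing the distribution forward through the transition model via the law of total probability, \(\Pr(S_{t+1}=s') = \sum_s \Pr(S_{t+1}=s'~|~S_t=s)\Pr(S_t=s)\). This is an illustrative re-implementation using plain dicts, not the library's code:

```python
def occupation_dist(initial, transition_model, T):
    """Pr(S_T): push the initial distribution forward T transition steps."""
    d = dict(initial)
    for _ in range(T):
        nxt = {}
        for s, p in d.items():
            for s2, p2 in transition_model(s).items():
                nxt[s2] = nxt.get(s2, 0.0) + p * p2   # total probability
        d = nxt
    return d

# Hypothetical chain: both states transition uniformly to {'a', 'b'},
# so one step from 'a' already yields {'a': 0.5, 'b': 0.5}.
initial = {'a': 1.0}
def tm(s):
    return {'a': 0.5, 'b': 0.5}
```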
sample_sequence(T)

Returns a sequence of length T, drawn from the distribution over sequences

state_sequence_prob(seq)

Returns the probability of a sequence of states in this Markov Chain

Parameters:seq – list of states \([s_0, s_1, \ldots, s_T]\)
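The probability of a state sequence factors as \(\Pr(s_0, \ldots, s_T) = \Pr(S_0 = s_0)\prod_{t=0}^{T-1}\Pr(S_{t+1}=s_{t+1}~|~S_t=s_t)\), which can be sketched directly (an illustrative version using plain dicts and a hypothetical two-state chain, not the library's code):

```python
def state_sequence_prob(initial, transition_model, seq):
    """Pr(s_0) times the product of transition probabilities along seq."""
    p = initial.get(seq[0], 0.0)
    for old, new in zip(seq, seq[1:]):
        p *= transition_model(old).get(new, 0.0)
    return p

# Hypothetical two-state chain:
initial = {'a': 0.5, 'b': 0.5}
def tm(s):
    return {'a': 0.9, 'b': 0.1} if s == 'a' else {'a': 0.4, 'b': 0.6}

# Pr(['a', 'a', 'b']) = 0.5 * 0.9 * 0.1 = 0.045
```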
class lib601.markov.MarkovChainSimulator(mc)

Class for simulating the evolution of a system described by a Markov chain

Parameters:
  • mc – The instance of MarkovChain to be simulated.

Variables:
  • mc – The instance of MarkovChain that was passed in as mc.
  • state – Holds the “current” state of the system (will have value None until initialize() is called, and will be updated by transition()).
initialize()

Randomly selects an element from self.mc.initial_distribution and stores the result in the instance variable state. Returns None.

transition()

Updates the instance variable state by randomly selecting an element from \(\Pr(S_{t+1}~|~S_t=\mathtt{self.state})\). Also returns the new state.

class lib601.markov.StateEstimator(hmm)

Class for estimating the state of some underlying system whose behavior is described by a Hidden Markov Model.

Parameters:
  • hmm – An instance of HMM describing the underlying system

Variables:
  • belief – The current belief about the underlying state, represented as a lib601.dist.DDist over states
observe(obs)

Update the belief based on making an observation. Changes belief but does not return anything.

Parameters: obs – The observation that was made
Returns: None
reset()

Reset the estimator so that the instance variable belief is equal to the initial belief.

transition()

Update the belief following a transition. Changes belief but does not return anything.

Returns: None
update(obs)

Update the belief based on making an observation and then transitioning. Changes belief but does not return anything. Same effect as observe() followed by transition(), but more efficient.

Parameters: obs – The observation that was made
Returns: None
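The observe / transition / update cycle is a discrete Bayes filter: observe reweights the belief by \(\Pr(O_t~|~S_t)\) and renormalizes, and transition pushes the belief through \(\Pr(S_{t+1}~|~S_t)\). A hedged sketch of that logic, using plain dicts in place of lib601.dist.DDist and a hypothetical two-state model (not the library's implementation):

```python
def observe(belief, observation_model, obs):
    """Bayes update: weight each state by Pr(obs | state), then normalize."""
    new = {s: p * observation_model(s).get(obs, 0.0) for s, p in belief.items()}
    total = sum(new.values())
    return {s: p / total for s, p in new.items()}

def transition(belief, transition_model):
    """Law of total probability: push the belief through the transitions."""
    new = {}
    for s, p in belief.items():
        for s2, p2 in transition_model(s).items():
            new[s2] = new.get(s2, 0.0) + p * p2
    return new

def update(belief, transition_model, observation_model, obs):
    # Same effect as observe followed by transition, as described above.
    return transition(observe(belief, observation_model, obs), transition_model)

# Hypothetical two-state weather model:
def tm(s):
    return {'sunny': 0.8, 'rainy': 0.2} if s == 'sunny' \
        else {'sunny': 0.3, 'rainy': 0.7}

def om(s):
    return {'dry': 0.9, 'wet': 0.1} if s == 'sunny' \
        else {'dry': 0.2, 'wet': 0.8}

belief = {'sunny': 0.5, 'rainy': 0.5}
belief = update(belief, tm, om, 'dry')   # condition on 'dry', then step forward
```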