Belief States
Belief states represent the agent’s uncertainty about the true state of the environment. POMDPPlanners provides flexible belief representations suitable for different problem types.
Belief Representations
Base Belief Interface
All belief representations inherit from the base Belief class:
- class POMDPPlanners.core.belief.Belief[source]
Bases: ABC
Abstract base class for POMDP belief state representations.
This class defines the interface for belief states in POMDP environments. Belief states represent probability distributions over the state space, capturing the agent’s uncertainty about the current state.
Note
This is an abstract base class and cannot be instantiated directly. Subclasses must implement the update() and sample() methods.
- classmethod from_config(config)[source]
Create a belief instance from configuration.
Factory method that dynamically creates belief instances based on configuration objects specifying the class name and parameters.
- Parameters:
config – Configuration object with class_name and params attributes
- Returns:
New belief instance of the specified type
- Raises:
ValueError – If the specified belief class is not found
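The factory lookup described above can be illustrated with a self-contained sketch. The registry, `_DemoBelief` class, and standalone `from_config` function below are illustrative stand-ins for the pattern, not POMDPPlanners code:

```python
from types import SimpleNamespace

# Illustrative stand-in for a belief class (not part of the library)
class _DemoBelief:
    def __init__(self, n_particles):
        self.n_particles = n_particles

# Map class names to classes, as a dynamic factory typically does
_REGISTRY = {"_DemoBelief": _DemoBelief}

def from_config(config):
    """Instantiate a registered class from a config with
    class_name and params attributes."""
    cls = _REGISTRY.get(config.class_name)
    if cls is None:
        raise ValueError(f"Unknown belief class: {config.class_name}")
    return cls(**config.params)

config = SimpleNamespace(class_name="_DemoBelief", params={"n_particles": 100})
belief = from_config(config)
print(belief.n_particles)  # 100
```

An unknown `class_name` raises `ValueError`, mirroring the behavior documented above.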
- inplace_update(action, observation, pomdp, state=None)[source]
Update the belief in place given an action-observation pair, modifying this instance rather than returning a new belief.
- Parameters:
action (Any) – Action that was executed
observation (Any) – Observation that was received
pomdp (Environment) – Environment providing transition and observation models
state (Any | None)
- abstractmethod sample()[source]
Sample a state from the current belief distribution.
- Returns:
A state sampled according to the belief’s probability distribution
Note
Subclasses must implement this method to enable state sampling for planning and simulation purposes.
- abstractmethod update(action, observation, pomdp, state=None)[source]
Update belief given an action-observation pair.
Performs Bayesian belief update using the environment’s transition and observation models.
- Parameters:
action (Any) – Action that was executed
observation (Any) – Observation that was received
pomdp (Environment) – Environment providing transition and observation models
state (Any | None)
- Returns:
Updated belief state reflecting the new information
Note
Subclasses must implement this method according to their specific belief representation and update strategy.
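For a discrete state space, the Bayesian update performed here can be sketched directly. The `T`/`O` array layouts below are illustrative, not the library's Environment interface:

```python
import numpy as np

def bayes_update(belief, action, observation, T, O):
    """Exact discrete belief update:
    b'(s') ∝ O[o | s', a] * sum_s T[s' | s, a] * b(s)
    T[a] is an (S, S) transition matrix, O[a] an (S, obs) matrix
    (illustrative layouts, not the library's interface)."""
    predicted = belief @ T[action]            # sum_s b(s) T(s'|s,a)
    updated = predicted * O[action][:, observation]
    return updated / updated.sum()            # normalize

# Two-state example in the spirit of the tiger problem
b = np.array([0.5, 0.5])
T = {0: np.eye(2)}                            # "listen" leaves the state unchanged
O = {0: np.array([[0.85, 0.15],               # hear the correct door 85% of the time
                  [0.15, 0.85]])}
b_new = bayes_update(b, action=0, observation=0, T=T, O=O)
print(b_new)
```

After one accurate observation, the belief shifts from uniform to roughly 0.85 on the observed state.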
Particle Filter Beliefs
- WeightedParticleBelief
Maintains particles with associated weights
Efficient for complex observation models
Supports importance sampling
Handles continuous state spaces well
- UnweightedParticleBelief
All particles have equal weight
Simpler implementation
Good for uniform beliefs
Faster sampling operations
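The practical difference between the two representations shows up when sampling. This sketch uses plain NumPy rather than the library classes:

```python
import numpy as np

rng = np.random.default_rng(0)
particles = np.array([0, 0, 1, 1, 2])
weights = np.array([0.3, 0.2, 0.2, 0.2, 0.1])

# Weighted sampling: particles are drawn in proportion to their weights
weighted_draw = rng.choice(particles, size=1000, p=weights)

# Unweighted sampling: every particle is equally likely
unweighted_draw = rng.choice(particles, size=1000)

# State 0 carries half the total weight (0.3 + 0.2), so it appears in
# roughly half the weighted draws but only ~40% (2 of 5 particles)
# of the unweighted ones.
print((weighted_draw == 0).mean(), (unweighted_draw == 0).mean())
```

An unweighted belief encodes probabilities purely through particle multiplicity, which is why its sampling is simpler and faster.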
Belief Operations
Sampling from Beliefs
from POMDPPlanners.core.belief import WeightedParticleBelief
import numpy as np
# Create belief with weighted particles
states = [0, 1, 2]
particles = [0, 0, 1, 1, 2]
weights = [0.3, 0.2, 0.2, 0.2, 0.1]
belief = WeightedParticleBelief(
    particles=particles,
    weights=weights,
    state_space=states
)
# Sample from belief
state_sample = belief.sample()
print(f"Sampled state: {state_sample}")
# Get state probabilities
probabilities = belief.get_state_probabilities()
print(f"State probabilities: {probabilities}")
Creating Initial Beliefs
from POMDPPlanners.core.belief import get_initial_belief
from POMDPPlanners.environments.tiger_pomdp import TigerPOMDP
env = TigerPOMDP()
# Create uniform initial belief
belief = get_initial_belief(env, n_particles=1000)
# Sample from initial belief
initial_state = belief.sample()
Belief Updates
Beliefs are updated based on actions and observations:
# After taking action and receiving observation
action = "listen"
observation = "hear_left"
# Update belief (typically done by planner)
updated_belief = planner.update_belief(
    current_belief=belief,
    action=action,
    observation=observation
)
Advanced Belief Operations
State Probability Queries
# Get probability of specific state
prob_tiger_left = belief.get_state_probabilities()["tiger_left"]
# Check if belief is concentrated
max_prob = max(belief.get_state_probabilities().values())
is_concentrated = max_prob > 0.8
Effective Sample Size
# For weighted particle beliefs
if hasattr(belief, 'effective_sample_size'):
    eff_size = belief.effective_sample_size()
    if eff_size < 100:  # Threshold for resampling
        print("Consider particle resampling")
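For reference, the effective sample size of a weighted particle set is conventionally computed as 1 / Σ wᵢ² for weights normalized to sum to 1. A standalone sketch, independent of the library's `effective_sample_size()` method:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for weights normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize defensively
    return 1.0 / np.sum(w ** 2)

# Uniform weights: every particle contributes equally, so ESS == N
print(effective_sample_size([0.25, 0.25, 0.25, 0.25]))  # 4.0

# Degenerate weights: one particle dominates, ESS approaches 1
print(effective_sample_size([0.97, 0.01, 0.01, 0.01]))  # ≈ 1.06
```

ESS falling well below the particle count is the standard signal that resampling is needed.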
Belief Entropy
import numpy as np
probs = list(belief.get_state_probabilities().values())
entropy = -sum(p * np.log(p) for p in probs if p > 0)
print(f"Belief entropy: {entropy:.3f}")
Working with Continuous States
For continuous state spaces, particles represent state samples:
from POMDPPlanners.core.belief import WeightedParticleBelief
from POMDPPlanners.environments.cartpole_pomdp import CartPolePOMDP
import numpy as np
env = CartPolePOMDP()
# Create belief with continuous state particles
particles = [
    np.array([0.1, 0.0, 0.05, 0.0]),  # [position, velocity, angle, angular_velocity]
    np.array([0.0, 0.1, -0.02, 0.1]),
    np.array([-0.05, -0.05, 0.0, -0.05])
]
belief = WeightedParticleBelief(
    particles=particles,
    weights=[0.4, 0.3, 0.3],
    state_space=None  # Continuous space
)
# Sample continuous state
continuous_state = belief.sample()
print(f"Sampled state: {continuous_state}")
Custom Belief Implementations
To create custom belief representations:
from POMDPPlanners.core.belief import Belief
import numpy as np

class GaussianBelief(Belief):
    def __init__(self, mean, covariance):
        self.mean = mean
        self.covariance = covariance

    def sample(self):
        return np.random.multivariate_normal(self.mean, self.covariance)

    def update(self, action, observation, pomdp, state=None):
        # Required by the Belief interface; apply a model-specific
        # update (e.g. a Kalman filter step) and return the new belief
        raise NotImplementedError

    def get_state_probabilities(self):
        # For continuous beliefs, this might return density estimates
        # or discretized approximations
        pass
Performance Considerations
- Particle Count
More particles → better approximation, slower computation
Typical range: 100-10,000 particles
Adjust based on problem complexity
- Resampling
Monitor effective sample size
Resample when weights become too uneven
Use systematic resampling for efficiency
- Memory Usage
Particle beliefs scale with particle count
Consider state compression for large states
Use appropriate data types (float32 vs float64)
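Systematic resampling, mentioned above, draws a single uniform offset and uses evenly spaced thresholds, giving lower variance than N independent draws. A standalone sketch (not the library's resampler):

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Resample N particles using one random offset and N evenly
    spaced positions over the cumulative weight distribution."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    w = np.asarray(weights, dtype=float)
    cumulative = np.cumsum(w / w.sum())
    # N evenly spaced positions, jittered by a single uniform draw
    positions = (rng.random() + np.arange(n)) / n
    indices = np.searchsorted(cumulative, positions)
    indices = np.minimum(indices, n - 1)  # guard against rounding at the top
    return [particles[i] for i in indices]

particles = [0, 1, 2, 3]
weights = [0.7, 0.1, 0.1, 0.1]
resampled = systematic_resample(particles, weights, np.random.default_rng(0))
```

A particle with normalized weight w is guaranteed between floor(N·w) and ceil(N·w) copies, so heavily weighted particles are never lost to sampling noise.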
See Also
../examples/beliefs - Belief usage examples
Planners - How planners use beliefs
../api/core - Complete API reference