POMDPPlanners.tests.test_core package
Subpackages
- POMDPPlanners.tests.test_core.test_belief package
- Submodules
- POMDPPlanners.tests.test_core.test_belief.belief_equivalence_utils module
- assert_chained_update_equivalence()
- assert_normalized_weights_match()
- assert_sample_distributions_match()
- assert_update_equivalence()
- assert_update_equivalence_per_particle_seeded()
- assert_update_particles_match()
- assert_update_particles_match_per_particle_seeded()
- assert_update_top_k_ranking_agrees()
- assert_update_weights_match()
- POMDPPlanners.tests.test_core.test_belief.test_base module
- POMDPPlanners.tests.test_core.test_belief.test_belief_environment_integration module
- POMDPPlanners.tests.test_core.test_belief.test_belief_utils module
- POMDPPlanners.tests.test_core.test_belief.test_gaussian_belief module
- POMDPPlanners.tests.test_core.test_belief.test_gaussian_belief_updaters module
TestExtendedKalmanFilterUpdater
- TestExtendedKalmanFilterUpdater.test_config_id_deterministic()
- TestExtendedKalmanFilterUpdater.test_config_id_sensitive_to_parameters()
- TestExtendedKalmanFilterUpdater.test_ekf_covariance_is_symmetric()
- TestExtendedKalmanFilterUpdater.test_integration_with_gaussian_belief()
- TestExtendedKalmanFilterUpdater.test_linear_system_matches_kf()
- TestExtendedKalmanFilterUpdater.test_nonlinear_system_reduces_covariance()
TestLinearKalmanFilterUpdater
- TestLinearKalmanFilterUpdater.test_1d_analytical_values()
- TestLinearKalmanFilterUpdater.test_1d_static_system()
- TestLinearKalmanFilterUpdater.test_2d_tracking_covariance_decreases()
- TestLinearKalmanFilterUpdater.test_config_id_deterministic()
- TestLinearKalmanFilterUpdater.test_config_id_sensitive_to_parameters()
- TestLinearKalmanFilterUpdater.test_integration_with_gaussian_belief()
- TestLinearKalmanFilterUpdater.test_with_control_input()
TestUnscentedKalmanFilterUpdater
- TestUnscentedKalmanFilterUpdater.test_config_id_deterministic()
- TestUnscentedKalmanFilterUpdater.test_config_id_sensitive_to_parameters()
- TestUnscentedKalmanFilterUpdater.test_higher_dimensional_system()
- TestUnscentedKalmanFilterUpdater.test_integration_with_gaussian_belief()
- TestUnscentedKalmanFilterUpdater.test_linear_system_matches_kf()
- TestUnscentedKalmanFilterUpdater.test_nonlinear_system_reduces_covariance()
- TestUnscentedKalmanFilterUpdater.test_sigma_point_scaling_parameters()
- TestUnscentedKalmanFilterUpdater.test_ukf_covariance_is_symmetric()
- TestUnscentedKalmanFilterUpdater.test_ukf_matches_ekf_on_linear_system()
- POMDPPlanners.tests.test_core.test_belief.test_gaussian_mixture_belief module
- POMDPPlanners.tests.test_core.test_belief.test_particle_beliefs module
- POMDPPlanners.tests.test_core.test_belief.test_vectorized_weighted_particle_belief module
- POMDPPlanners.tests.test_core.test_belief.vectorized_updater_test_utils module
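The Gaussian-belief updater tests listed above (for example `test_linear_system_matches_kf` and `test_1d_analytical_values`) rest on the standard linear Kalman-filter recursion. The following is a minimal, self-contained sketch of one predict/correct step — not the package's updater API, just the textbook identities those tests check against:

```python
import numpy as np

def kf_step(mu, P, A, C, Q, R, y):
    """One predict + correct step of a linear Kalman filter.

    mu, P -- prior mean and covariance
    A, Q  -- linear dynamics and process-noise covariance
    C, R  -- linear observation model and observation-noise covariance
    y     -- the received observation
    """
    # Predict through the dynamics.
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    # Correct with the observation via the Kalman gain K.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    mu_post = mu_pred + K @ (y - C @ mu_pred)
    P_post = (np.eye(len(mu)) - K @ C) @ P_pred
    # Symmetrize to guard against floating-point drift, mirroring the
    # covariance-symmetry checks in the tests above.
    P_post = 0.5 * (P_post + P_post.T)
    return mu_post, P_post

# 1-D static system (A = 1, Q = 0, C = 1): the posterior variance is the
# harmonic combination P*R/(P + R), the kind of closed-form value a test
# like test_1d_analytical_values can assert against.
mu, P = kf_step(np.array([0.0]), np.array([[1.0]]),
                np.eye(1), np.eye(1), np.zeros((1, 1)), np.array([[1.0]]),
                np.array([0.5]))
```

On a linear system the EKF and UKF reduce to exactly this recursion, which is what the `*_matches_kf` and `test_ukf_matches_ekf_on_linear_system` tests exploit.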
Submodules
POMDPPlanners.tests.test_core.test_cost module
POMDPPlanners.tests.test_core.test_distributions module
POMDPPlanners.tests.test_core.test_environment module
POMDPPlanners.tests.test_core.test_hyper_parameter_tuning module
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestCategoricalHyperParameterIdUniqueness[source]
Bases: object
Test uniqueness of CategoricalHyperParameter.id() method.
- test_complex_choices_produce_unique_ids()[source]
Test that hyperparameters with complex data types produce unique IDs.
- test_different_choices_produce_different_ids()[source]
Test that hyperparameters with different choices produce different IDs.
- test_different_names_produce_different_ids()[source]
Test that hyperparameters with different names produce different IDs.
- test_different_order_choices_produce_different_ids()[source]
Test that hyperparameters with the same choices in a different order produce different IDs.
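The uniqueness properties above (sensitive to name, to choices, and to choice order) are what you get from hashing a canonical representation that preserves choice order. A hypothetical sketch — `categorical_id` and its payload format are illustrative, not the library's actual `id()` implementation:

```python
import hashlib

def categorical_id(name: str, choices) -> str:
    """Deterministic ID for a named categorical hyperparameter."""
    # Choice order is preserved in the payload, so reordering the choices
    # yields a different ID, as the order-sensitivity test expects.
    # repr() also distinguishes complex choice types (tuples, dicts, ...).
    payload = repr((name, tuple(choices)))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

With this construction, equal inputs hash equally and any change to the name, the choices, or their order changes the ID.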
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestHyperParamPlannerConfigIdUniqueness[source]
Bases: object
Test uniqueness of HyperParamPlannerConfig.config_id property.
- test_different_constant_parameters_produce_different_ids()[source]
Test that configs with different constant parameters produce different IDs.
- test_different_hyper_parameter_order_produces_same_id()[source]
Test that a different order of hyper parameters produces the same ID (due to sorting).
- test_different_hyper_parameters_produce_different_ids()[source]
Test that configs with different hyper parameters produce different IDs.
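`test_different_hyper_parameter_order_produces_same_id` points at the design choice that makes `config_id` usable as a stable key: hyperparameter IDs are sorted before hashing, so list order is irrelevant while content is not. A hypothetical sketch (function name and payload format are assumptions, not the package's implementation):

```python
import hashlib

def config_id(policy_cls_name, hyper_parameter_ids, constant_parameters):
    """Order-insensitive, content-sensitive ID for a planner config."""
    # Sorting makes the ID insensitive to the order in which
    # hyperparameters were listed, but still sensitive to their content
    # and to the constant parameters.
    payload = repr((
        policy_cls_name,
        tuple(sorted(hyper_parameter_ids)),
        tuple(sorted(constant_parameters.items())),
    ))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

a = config_id("MCTSPolicy", ["hp-depth", "hp-c"], {"seed": 0})
b = config_id("MCTSPolicy", ["hp-c", "hp-depth"], {"seed": 0})  # reordered
c = config_id("MCTSPolicy", ["hp-c", "hp-depth"], {"seed": 1})  # new constant
```

Here `a == b` (order does not matter) while `a != c` (constants do), matching the three tests above.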
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestHyperParamPlannerConfigValidation[source]
Bases: object
Test validation of HyperParamPlannerConfig inputs.
- test_invalid_constant_parameter_name_raises_value_error()[source]
Test that constant parameter names not in policy constructor raise ValueError.
Purpose: Validates that all constant parameter names correspond to policy class constructor parameters
Given: A policy class with specific constructor parameters
When: HyperParamPlannerConfig is created with constant parameter name not in constructor
Then: ValueError is raised listing valid parameter names
Test type: unit
- test_invalid_constant_parameters_type_raises_type_error()[source]
Test that non-dict constant_parameters raises TypeError.
Purpose: Validates that constant_parameters must be a dict
Given: An attempt to create HyperParamPlannerConfig with non-dict constant_parameters
When: The config is instantiated with a list instead of a dict
Then: TypeError is raised with descriptive message
Test type: unit
- test_invalid_hyper_parameter_element_type_raises_type_error()[source]
Test that invalid hyperparameter element types raise TypeError.
Purpose: Validates that all elements in hyper_parameters are valid HyperParameterFeature types
Given: An attempt to create HyperParamPlannerConfig with invalid hyperparameter element
When: The config is instantiated with a string in the hyper_parameters list
Then: TypeError is raised indicating the invalid element index and type
Test type: unit
- test_invalid_hyper_parameters_type_raises_type_error()[source]
Test that non-sequence hyper_parameters raises TypeError.
Purpose: Validates that hyper_parameters must be a Sequence (list or tuple)
Given: An attempt to create HyperParamPlannerConfig with non-sequence hyper_parameters
When: The config is instantiated with a dict instead of a list/tuple
Then: TypeError is raised with descriptive message
Test type: unit
- test_invalid_hyperparameter_name_raises_value_error()[source]
Test that hyperparameter names not in policy constructor raise ValueError.
Purpose: Validates that all hyperparameter names correspond to policy class constructor parameters
Given: A policy class with specific constructor parameters
When: HyperParamPlannerConfig is created with hyperparameter name not in constructor
Then: ValueError is raised listing valid parameter names
Test type: unit
- test_invalid_policy_cls_type_raises_type_error()[source]
Test that non-class policy_cls raises TypeError.
Purpose: Validates that policy_cls must be a class type
Given: An attempt to create HyperParamPlannerConfig with non-class policy_cls
When: The config is instantiated with a string instead of a Policy class
Then: TypeError is raised with descriptive message
Test type: unit
- test_valid_config_with_all_parameters_succeeds()[source]
Test that valid configuration with all correct parameters succeeds.
Purpose: Validates that properly configured HyperParamPlannerConfig is created without errors
Given: A policy class with specific constructor parameters
When: HyperParamPlannerConfig is created with valid hyperparameters and constants
Then: Config is created successfully and has valid config_id
Test type: unit
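The name-validation tests above (for both hyperparameters and constant parameters) amount to checking supplied names against the policy class constructor signature. A minimal sketch of that check — `DummyPolicy` and `validate_parameter_names` are illustrative stand-ins, not the package's code:

```python
import inspect

def validate_parameter_names(policy_cls, names):
    """Raise ValueError for any name not accepted by policy_cls.__init__."""
    valid = set(inspect.signature(policy_cls.__init__).parameters) - {"self"}
    for name in names:
        if name not in valid:
            # Error message lists the valid names, as the ValueError tests expect.
            raise ValueError(
                f"{name!r} is not a constructor parameter of "
                f"{policy_cls.__name__}; valid names: {sorted(valid)}"
            )

class DummyPolicy:
    def __init__(self, depth, exploration_constant=1.0):
        self.depth = depth
        self.exploration_constant = exploration_constant

validate_parameter_names(DummyPolicy, ["depth"])  # valid name: passes silently
```

An unknown name such as `"not_a_param"` would raise a ValueError listing `depth` and `exploration_constant` as the valid choices.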
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestHyperParameterRunParamsIdUniqueness[source]
Bases: object
Test uniqueness of HyperParameterRunParams.config_id property.
- test_config_id_property_exists()[source]
Test that HyperParameterRunParams has a config_id property.
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestIdConsistency[source]
Bases: object
Test consistency of ID generation across different instances.
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestNumericalHyperParameterIdUniqueness[source]
Bases: object
Test uniqueness of NumericalHyperParameter.id() method.
- test_different_high_values_produce_different_ids()[source]
Test that hyperparameters with different high values produce different IDs.
- test_different_low_values_produce_different_ids()[source]
Test that hyperparameters with different low values produce different IDs.
- test_different_names_produce_different_ids()[source]
Test that hyperparameters with different names produce different IDs.
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestOptimizedPolicyResultValidation[source]
Bases: object
Test validation of OptimizedPolicyResult inputs.
- test_empty_chosen_hyper_parameters_raises_value_error()[source]
Test that empty chosen_hyper_parameters dict raises ValueError.
Purpose: Validates that chosen_hyper_parameters cannot be empty
Given: Valid environment and policy
When: OptimizedPolicyResult is created with empty chosen_hyper_parameters dict
Then: ValueError is raised indicating dict cannot be empty
Test type: unit
- test_empty_parameters_to_optimize_raises_value_error()[source]
Test that empty parameters_to_optimize list raises ValueError.
Purpose: Validates that parameters_to_optimize cannot be empty
Given: Valid environment and policy
When: OptimizedPolicyResult is created with empty parameters_to_optimize list
Then: ValueError is raised indicating list cannot be empty
Test type: unit
- test_extra_metric_in_optimized_metric_values_raises_value_error()[source]
Test that extra metrics in optimized_metric_values raise ValueError.
Purpose: Validates that optimized_metric_values only contains metrics from parameters_to_optimize
Given: Valid environment and policy
When: OptimizedPolicyResult is created with extra metric in optimized_metric_values
Then: ValueError is raised indicating extra metric
Test type: unit
- test_frozen_dataclass_is_immutable()[source]
Test that OptimizedPolicyResult is immutable (frozen).
Purpose: Validates that OptimizedPolicyResult is a frozen dataclass
Given: A created OptimizedPolicyResult instance
When: Attempting to modify any attribute
Then: FrozenInstanceError or AttributeError is raised
Test type: unit
- test_invalid_chosen_hyper_parameters_type_raises_type_error()[source]
Test that non-dict chosen_hyper_parameters raises TypeError.
Purpose: Validates that chosen_hyper_parameters must be a dict
Given: Valid environment and policy
When: OptimizedPolicyResult is created with non-dict chosen_hyper_parameters
Then: TypeError is raised with descriptive message
Test type: unit
- test_invalid_direction_type_in_parameters_to_optimize_raises_type_error()[source]
Test that non-HyperParameterOptimizationDirection directions raise TypeError.
Purpose: Validates that directions in parameters_to_optimize must be HyperParameterOptimizationDirection
Given: Valid environment and policy
When: OptimizedPolicyResult is created with invalid direction type
Then: TypeError is raised indicating direction must be HyperParameterOptimizationDirection
Test type: unit
- test_invalid_environment_type_raises_type_error()[source]
Test that non-Environment environment raises TypeError.
Purpose: Validates that environment must be an Environment instance
Given: An invalid environment type (not an Environment subclass)
When: OptimizedPolicyResult is created with invalid environment
Then: TypeError is raised with descriptive message
Test type: unit
- test_invalid_metric_name_raises_value_error()[source]
Test that invalid metric names in parameters_to_optimize raise ValueError.
Purpose: Validates that metric names must be valid for the environment-policy pair
Given: Valid environment and policy with specific available metrics
When: OptimizedPolicyResult is created with invalid metric name
Then: ValueError is raised listing available metrics
Test type: unit
- test_invalid_metric_name_type_in_parameters_to_optimize_raises_type_error()[source]
Test that non-string metric names in parameters_to_optimize raise TypeError.
Purpose: Validates that metric names in parameters_to_optimize must be strings
Given: Valid environment and policy
When: OptimizedPolicyResult is created with non-string metric name
Then: TypeError is raised indicating metric_name must be str
Test type: unit
- test_invalid_optimized_metric_values_type_raises_type_error()[source]
Test that non-dict optimized_metric_values raises TypeError.
Purpose: Validates that optimized_metric_values must be a dict
Given: Valid environment and policy
When: OptimizedPolicyResult is created with non-dict optimized_metric_values
Then: TypeError is raised with descriptive message
Test type: unit
- test_invalid_parameter_to_optimize_tuple_type_raises_type_error()[source]
Test that non-tuple elements in parameters_to_optimize raise TypeError.
Purpose: Validates that each element in parameters_to_optimize is a tuple of length 2
Given: Valid environment and policy
When: OptimizedPolicyResult is created with non-tuple element in parameters_to_optimize
Then: TypeError is raised indicating expected tuple of length 2
Test type: unit
- test_invalid_parameters_to_optimize_type_raises_type_error()[source]
Test that non-list parameters_to_optimize raises TypeError.
Purpose: Validates that parameters_to_optimize must be a list
Given: Valid environment and policy
When: OptimizedPolicyResult is created with non-list parameters_to_optimize
Then: TypeError is raised with descriptive message
Test type: unit
- test_invalid_policy_type_raises_type_error()[source]
Test that non-Policy policy raises TypeError.
Purpose: Validates that policy must be a Policy instance
Given: An invalid policy type (not a Policy subclass)
When: OptimizedPolicyResult is created with invalid policy
Then: TypeError is raised with descriptive message
Test type: unit
- test_missing_metric_in_optimized_metric_values_raises_value_error()[source]
Test that missing metrics in optimized_metric_values raise ValueError.
Purpose: Validates that all metrics in parameters_to_optimize must be in optimized_metric_values
Given: Valid environment and policy
When: OptimizedPolicyResult is created with metric in parameters_to_optimize but not in optimized_metric_values
Then: ValueError is raised indicating missing metric
Test type: unit
- test_multiple_metrics_validation_succeeds()[source]
Test that multiple valid metrics pass validation.
Purpose: Validates that OptimizedPolicyResult handles multiple optimization metrics correctly
Given: Valid environment and policy with multiple available metrics
When: OptimizedPolicyResult is created with multiple metrics to optimize
Then: Result is created successfully with all metrics
Test type: unit
- test_negative_num_episodes_raises_value_error()[source]
Test that negative num_episodes raises ValueError.
Purpose: Validates that num_episodes must be positive
Given: Valid environment and policy
When: OptimizedPolicyResult is created with num_episodes <= 0
Then: ValueError is raised with descriptive message
Test type: unit
- test_negative_num_steps_raises_value_error()[source]
Test that negative num_steps raises ValueError.
Purpose: Validates that num_steps must be positive
Given: Valid environment and policy
When: OptimizedPolicyResult is created with num_steps <= 0
Then: ValueError is raised with descriptive message
Test type: unit
- test_none_metric_values_are_allowed()[source]
Test that None values in optimized_metric_values are allowed.
Purpose: Validates that optimized_metric_values can contain None for missing metrics
Given: Valid environment and policy
When: OptimizedPolicyResult is created with None as a metric value
Then: Result is created successfully
Test type: unit
- test_valid_optimized_policy_result_succeeds()[source]
Test that valid OptimizedPolicyResult is created successfully.
Purpose: Validates that properly configured OptimizedPolicyResult is created without errors
Given: Valid environment, policy, and optimization parameters
When: OptimizedPolicyResult is created with all correct parameters
Then: Result is created successfully as a frozen dataclass
Test type: unit
- test_zero_num_episodes_raises_value_error()[source]
Test that zero num_episodes raises ValueError.
Purpose: Validates that num_episodes must be strictly positive
Given: Valid environment and policy
When: OptimizedPolicyResult is created with num_episodes = 0
Then: ValueError is raised
Test type: unit
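Two behaviors exercised above, immutability (`test_frozen_dataclass_is_immutable`) and construction-time validation (the positivity tests), are both standard frozen-dataclass patterns. A minimal sketch with a hypothetical stand-in class, not the real `OptimizedPolicyResult` fields:

```python
import dataclasses

@dataclasses.dataclass(frozen=True)
class Result:
    # Hypothetical fields for illustration only.
    num_episodes: int
    num_steps: int

    def __post_init__(self):
        # Validation still runs on construction even for frozen
        # dataclasses, matching the ValueError tests above.
        if self.num_episodes <= 0:
            raise ValueError("num_episodes must be positive")
        if self.num_steps <= 0:
            raise ValueError("num_steps must be positive")

r = Result(num_episodes=10, num_steps=100)
```

Once constructed, any assignment to `r.num_episodes` raises `dataclasses.FrozenInstanceError`, and `Result(num_episodes=0, num_steps=1)` fails validation with a ValueError.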
- class POMDPPlanners.tests.test_core.test_hyper_parameter_tuning.TestParallelizationLevelEnum[source]
Bases: object
Tests for the ParallelizationLevel enum.
- test_enum_from_value()[source]
Test ParallelizationLevel can be constructed from string values.
Purpose: Validates enum can be created from string values
Given: String values matching enum values
When: Constructing enum instances from strings
Then: Correct enum members are returned
Test type: unit
- test_enum_invalid_value_raises_error()[source]
Test that invalid string raises ValueError.
Purpose: Validates that invalid values are rejected
Given: An invalid string value
When: Constructing a ParallelizationLevel from it
Then: ValueError is raised
Test type: unit
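The two enum tests above exercise Python's standard value lookup: calling an Enum class with a value returns the matching member, and an unknown value raises ValueError. A sketch with assumed member names and values (the real `ParallelizationLevel` members may differ):

```python
import enum

class ParallelizationLevel(enum.Enum):
    # Member names and string values here are assumptions for illustration.
    NONE = "none"
    EPISODE = "episode"
    TASK = "task"

# Value lookup returns the matching member (test_enum_from_value).
level = ParallelizationLevel("episode")
```

A call like `ParallelizationLevel("bogus")` raises ValueError, which is the behavior `test_enum_invalid_value_raises_error` pins down.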
POMDPPlanners.tests.test_core.test_policy module
POMDPPlanners.tests.test_core.test_serialization module
POMDPPlanners.tests.test_core.test_simulation module
Tests for simulation functionality.
This module tests the simulation functionality, focusing on:
- Basic simulation operations
- Episode simulation
- History tracking
- Metrics computation
- class POMDPPlanners.tests.test_core.test_simulation.MockDatabase[source]
Bases: DataBaseInterface
- class POMDPPlanners.tests.test_core.test_simulation.MockSimulationTask(config_id, should_succeed=True)[source]
Bases: SimulationTask
- class POMDPPlanners.tests.test_core.test_simulation.MockTaskManagerExternalDB(cache_db, cache_dir=None, logger_debug=False, use_queue_logger=False, console_output=True, no_logs=False)[source]
Bases: TaskManagerExternalDB
- POMDPPlanners.tests.test_core.test_simulation.create_test_belief()[source]
Helper function to create a valid belief state for testing.
- POMDPPlanners.tests.test_core.test_simulation.test_history_equality()[source]
Test History class equality comparison.
Purpose: Validates equality comparison for history
Given: Objects with same or different configurations
When: Equality comparison is performed
Then: Objects are correctly identified as equal or unequal
Test type: unit
- POMDPPlanners.tests.test_core.test_simulation.test_history_serialization()[source]
Test History serialization and deserialization.
Purpose: Validates that History objects can be serialized to dictionaries and deserialized back to equivalent objects
Given: History object with StepData, timing attributes, and configuration parameters
When: to_dict() and from_dict() methods are used for serialization and deserialization
Then: Serialized dictionary contains all key fields and deserialized History equals original object
Test type: unit
- POMDPPlanners.tests.test_core.test_simulation.test_task_manager_external_db()[source]
Test TaskManagerExternalDB with successful and failed tasks.
Purpose: Validates that TaskManagerExternalDB correctly handles mixed success/failure scenarios and caching behavior
Given: MockDatabase, MockTaskManagerExternalDB, and 4 MockSimulationTasks (3 successful, 1 failed) with identifiers
When: run_tasks is called with mixed success/failure tasks
Then: Only successful tasks (3) are returned and cached, failed tasks are excluded, subsequent runs use cache
Test type: unit
- POMDPPlanners.tests.test_core.test_simulation.test_task_manager_external_db_all_cached()[source]
Test TaskManagerExternalDB when all tasks are cached.
Purpose: Validates that TaskManagerExternalDB correctly retrieves all results from cache when tasks are pre-cached
Given: MockDatabase with pre-cached results, MockTaskManagerExternalDB, and 2 MockSimulationTasks with cached entries
When: run_tasks is called with tasks that have pre-existing cache entries
Then: All cached results (2) are returned with correct identifiers without re-executing tasks
Test type: unit
- POMDPPlanners.tests.test_core.test_simulation.test_task_manager_external_db_all_failed()[source]
Test TaskManagerExternalDB when all tasks fail.
Purpose: Validates that TaskManagerExternalDB correctly handles edge case where all tasks fail
Given: MockDatabase, MockTaskManagerExternalDB, and 2 MockSimulationTasks that both fail (should_succeed=False)
When: run_tasks is called with all failing tasks
Then: Empty results and successful_ids lists are returned (no tasks succeed or get cached)
Test type: unit
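Taken together, the three TaskManagerExternalDB tests above describe one contract: cached results are served without re-execution, failed tasks are neither returned nor cached, and only successes populate the cache. A minimal sketch of that contract — `Task` and `run_tasks` are simplified stand-ins, with a plain dict in place of the external database:

```python
from dataclasses import dataclass

@dataclass
class Task:
    config_id: str
    should_succeed: bool = True

    def run(self):
        # None signals failure, mirroring should_succeed=False tasks.
        return f"result-{self.config_id}" if self.should_succeed else None

def run_tasks(tasks, cache):
    """Return (results, successful_ids); serve cache hits, cache new successes."""
    results, successful_ids = [], []
    for task in tasks:
        if task.config_id in cache:          # pre-cached: no re-execution
            results.append(cache[task.config_id])
            successful_ids.append(task.config_id)
            continue
        outcome = task.run()
        if outcome is not None:              # failures are dropped, not cached
            cache[task.config_id] = outcome
            results.append(outcome)
            successful_ids.append(task.config_id)
    return results, successful_ids
```

With 3 succeeding tasks and 1 failing task, the first call returns 3 results and caches them; a second call with the same tasks returns the same 3 results from the cache while the failing task is re-attempted and dropped again, matching the mixed, all-cached, and all-failed scenarios above.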