MultiAgentDecisionProcess

Globals contains several definitions global to the MADP toolbox.
Typedefs

typedef unsigned int Index
    A general index.
typedef unsigned long long int LIndex
    A long long index.
Enumerations

enum reward_t { REWARD, COST }
    Inherited from Tony's POMDP file format.
Functions

double CastLIndexToDouble (LIndex i)
Index CastLIndexToIndex (LIndex i)
bool EqualProbability (double p1, double p2)
bool EqualReward (double r1, double r2)
Variables

const size_t ALL_SOLUTIONS = 0
    Constant to denote all solutions (e.g., nrDesiredSolutions = ALL_SOLUTIONS).
const Index INITIAL_JAOHI = 0
    The initial (=empty) joint action-observation history index.
const Index INITIAL_JOHI = 0
    The initial (=empty) joint observation history index.
const unsigned int MAXHORIZON = 999999
    The highest horizon we will consider.
const double PROB_PRECISION = 1e-12
    The precision for probabilities.
const double REWARD_PRECISION = 1e-12
    Used to determine when two (immediate) rewards are considered equal.
Detailed Description

Globals contains several definitions global to the MADP toolbox.
Typedef Documentation

typedef unsigned int Globals::Index

A general index.
typedef unsigned long long int Globals::LIndex

A long long index.
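Joint indices (e.g., of joint histories or joint policies) grow exponentially with the horizon and the number of agents, so they can exceed the 32-bit range of Index; LIndex provides 64 bits for such counts. A minimal stand-alone sketch (the problem sizes are illustrative, not part of the toolbox):

    #include <iostream>

    typedef unsigned int Index;            // Globals::Index
    typedef unsigned long long int LIndex; // Globals::LIndex

    int main()
    {
        // With 4 joint observations and horizon 20, the number of joint
        // observation histories (all lengths 0..20) already exceeds the
        // 32-bit Index range, so such counts have to be held in an LIndex.
        LIndex nrJOH = 0;
        LIndex perStage = 1;
        for(unsigned int t = 0; t <= 20; ++t)
        {
            nrJOH += perStage; // number of histories of length t
            perStage *= 4;     // 4 joint observations per stage
        }
        std::cout << "nr. joint observation histories: " << nrJOH << std::endl;
        return 0;
    }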
Enumeration Type Documentation

enum Globals::reward_t

Inherited from Tony's POMDP file format.
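The values: line of Tony Cassandra's .pomdp file format declares whether the numbers in the file are rewards or costs. A sketch of how a parser might map that field onto reward_t; ParseValuesField and SignedReward are hypothetical helpers, not toolbox API:

    #include <stdexcept>
    #include <string>

    enum reward_t { REWARD, COST }; // Globals::reward_t

    // Hypothetical helper: map the "values:" field of a .pomdp file
    // onto reward_t.
    reward_t ParseValuesField(const std::string& s)
    {
        if(s == "reward") return REWARD;
        if(s == "cost")   return COST;
        throw std::runtime_error("unknown values: field '" + s + "'");
    }

    // Costs can then be handled by negating the parsed numbers.
    double SignedReward(reward_t type, double parsedValue)
    {
        return (type == COST) ? -parsedValue : parsedValue;
    }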
Function Documentation

double Globals::CastLIndexToDouble (LIndex i)
Referenced by ActionObservationHistory::ActionObservationHistory(), BGIPSolution::AddSolution(), MonahanBGPlanner::BackProjectMonahanBG(), BayesianGameWithClusterInfo::BayesianGameWithClusterInfo(), AlphaVectorBG::BeliefBackupExhaustiveStoreAll(), QPOMDP::ComputeRecursively(), QBG::ComputeRecursively(), QHybrid::ComputeRecursively(), QBG::ComputeRecursivelyNoCache(), BG_FactorGraphCreator::Construct_AgentBGPolicy_Variables(), BG_FactorGraphCreator::Construct_LocalPayoff_Factors(), GMAA_MAAstarClassic::ConstructAndValuateNextPolicies(), GMAA_kGMAA::ConstructAndValuateNextPolicies(), GMAA_MAAstar::ConstructAndValuateNextPolicies(), PlanningUnitMADPDiscrete::CreateActionHistoryTree(), PlanningUnitMADPDiscrete::CreateActionObservationHistoryTree(), PlanningUnitMADPDiscrete::CreateObservationHistoryTree(), BGCG_SolverNonserialDynamicProgramming::EliminateAgent(), BayesianGameWithClusterInfo::Extend(), BGforStageCreation::Fill_FirstOHtsI(), BayesianGameForDecPOMDPStage::Fill_FirstOHtsI(), GMAA_MAA_ELSI::Fill_FirstOHtsI(), GMAA_MAA_ELSI::Fill_jaI_Array(), PlanningUnitMADPDiscrete::GetActionHistoryArrays(), PlanningUnitMADPDiscrete::GetActionObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetActionObservationHistoryIndex(), IndividualBeliefJESP::GetAugmentedStateIndex(), PlanningUnitMADPDiscrete::GetJAOHProbGivenPred(), PlanningUnitMADPDiscrete::GetJAOHProbs(), PlanningUnitMADPDiscrete::GetJointActionHistoryIndex(), JPolComponent_VectorImplementation::GetJointActionIndex(), PlanningUnitMADPDiscrete::GetJointActionObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetJointActionObservationHistoryIndex(), PlanningUnitMADPDiscrete::GetJointActionObservationHistoryTree(), PlanningUnitMADPDiscrete::GetJointBeliefInterface(), PlanningUnitMADPDiscrete::GetJointObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetJointObservationHistoryIndex(), BGCG_SolverNonserialDynamicProgramming::GetJpolIndexForBestResponses(), PlanningUnitMADPDiscrete::GetNrPolicyDomainElements(), PlanningUnitMADPDiscrete::GetObservationHistoryArrays(), PlanningUnitMADPDiscrete::GetObservationHistoryIndex(), IndividualBeliefJESP::GetOthersObservationHistIndex(), PlanningUnitMADPDiscrete::GetSuccessorAHI(), PlanningUnitMADPDiscrete::GetSuccessorAOHI(), PlanningUnitMADPDiscrete::GetSuccessorJAHI(), PlanningUnitMADPDiscrete::GetSuccessorJOHI(), PlanningUnitMADPDiscrete::GetSuccessorOHI(), PlanningUnitMADPDiscrete::GetTimeStepForAOHI(), AlphaVectorPlanning::ImportValueFunction(), PlanningUnitMADPDiscrete::InitializeJointActionObservationHistories(), PlanningUnitMADPDiscrete::InitializeJointObservationHistories(), JointObservationHistory::JointObservationHistory(), PlanningUnitMADPDiscrete::JointToIndividualActionObservationHistoryIndicesRef(), LocalBGValueFunctionVector::LocalBGValueFunctionVector(), BayesianGameForDecPOMDPStage::ProbRewardForjoahI(), GMAA_MAA_ELSI::ProbRewardForjoahI(), PlanningUnitMADPDiscrete::RegisterJointActionObservationHistoryTree(), PolicyPureVector::SetIndex(), and FSAOHDist_NECOF::Update().
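A stand-alone sketch of what these casts are for: 64-bit history or policy indices sometimes have to be narrowed back to Index or converted to double. The bodies below are assumptions written for illustration, not the toolbox's implementations; in particular, the explicit overflow check in CastLIndexToIndex is this sketch's choice.

    #include <limits>
    #include <stdexcept>

    typedef unsigned int Index;
    typedef unsigned long long int LIndex;

    // Stand-in for Globals::CastLIndexToDouble.
    double CastLIndexToDouble(LIndex i)
    {
        return static_cast<double>(i);
    }

    // Stand-in for Globals::CastLIndexToIndex; this version refuses
    // to narrow silently when the value does not fit in 32 bits.
    Index CastLIndexToIndex(LIndex i)
    {
        if(i > std::numeric_limits<Index>::max())
            throw std::overflow_error("LIndex does not fit in Index");
        return static_cast<Index>(i);
    }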
bool Globals::EqualProbability (double p1, double p2)
References PROB_PRECISION.
Referenced by MultiAgentDecisionProcessDiscreteFactoredStates::CacheFlatObservationModel(), MultiAgentDecisionProcessDiscreteFactoredStates::CacheFlatTransitionModel(), DecPOMDPDiscrete::CompareModels(), Problem_CGBG_FF::ComputeLocalUtility(), BG_FactorGraphCreator::Construct_JointType_Factors(), BG_FactorGraphCreator::Construct_LocalJointType_Factors(), BayesianGameWithClusterInfo::ConstructClusteredIndividualTypes(), MultiAgentDecisionProcessDiscreteFactoredStates::MarginalizeTransitionObservationModel(), CPT::SanityCheck(), FSDist_COF::SanityCheck(), FSAOHDist_NECOF::SanityCheck(), BayesianGameBase::SanityCheckBGBase(), BayesianGameWithClusterInfo::TestApproximateEquivalence(), and BayesianGameWithClusterInfo::TestExactEquivalence().
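A stand-alone sketch of the comparison and a typical use, checking that a belief sums to one as the SanityCheck() methods listed above do for their distributions. The absolute-difference test is an assumption that matches the documented intent (equality up to PROB_PRECISION):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    const double PROB_PRECISION = 1e-12;

    // Stand-in for Globals::EqualProbability.
    bool EqualProbability(double p1, double p2)
    {
        return std::fabs(p1 - p2) < PROB_PRECISION;
    }

    // Typical use: verify that a (flat) belief is a proper distribution.
    bool BeliefSumsToOne(const std::vector<double>& belief)
    {
        double sum = 0.0;
        for(std::size_t s = 0; s < belief.size(); ++s)
            sum += belief[s];
        return EqualProbability(sum, 1.0);
    }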
bool Globals::EqualReward (double r1, double r2)

References REWARD_PRECISION.
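A stand-alone sketch, assuming the same absolute-difference comparison style as EqualProbability:

    #include <cmath>

    const double REWARD_PRECISION = 1e-12;

    // Stand-in for Globals::EqualReward: two immediate rewards are
    // considered equal when they agree up to REWARD_PRECISION,
    // instead of being compared with ==.
    bool EqualReward(double r1, double r2)
    {
        return std::fabs(r1 - r2) < REWARD_PRECISION;
    }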
Variable Documentation

const size_t Globals::ALL_SOLUTIONS = 0

Constant to denote all solutions (e.g., nrDesiredSolutions = ALL_SOLUTIONS).
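A sketch of the intended use: passing ALL_SOLUTIONS as nrDesiredSolutions tells a solver to keep every solution found rather than only the best k. The SolverOptions struct and KeepSolution helper are hypothetical illustrations, not toolbox API:

    #include <cstddef>

    const size_t ALL_SOLUTIONS = 0;

    // Hypothetical options struct; in the toolbox, nrDesiredSolutions
    // is a parameter of the Bayesian game solvers.
    struct SolverOptions
    {
        size_t nrDesiredSolutions;
    };

    bool KeepSolution(const SolverOptions& o, size_t nrKeptSoFar)
    {
        // 0 is a sentinel meaning "keep everything"; otherwise keep
        // at most the requested number of solutions.
        return o.nrDesiredSolutions == ALL_SOLUTIONS
            || nrKeptSoFar < o.nrDesiredSolutions;
    }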
const Index Globals::INITIAL_JAOHI = 0

The initial (=empty) joint action-observation history index.
Referenced by QFunctionJAOHTree::ComputeQ(), QHybrid::ComputeQ(), and PlanningUnitMADPDiscrete::GetJAOHProbs().
const Index Globals::INITIAL_JOHI = 0

The initial (=empty) joint observation history index.
Referenced by ValueFunctionDecPOMDPDiscrete::CalculateV(), SimulationDecPOMDPDiscrete::RunSimulation(), and SimulationDecPOMDPDiscrete::RunSimulationClusteredBG().
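Both INITIAL_JAOHI and INITIAL_JOHI are sentinels for the empty history at the first stage: recursions and simulations start from index 0 and then advance to successor history indices. A schematic sketch; the GetSuccessorJOHI below is a level-order tree indexing written for illustration only, not the PlanningUnitMADPDiscrete method of the same name:

    typedef unsigned int Index;

    const Index INITIAL_JOHI = 0; // empty joint observation history

    // Level-order indexing of a complete observation-history tree, for
    // illustration: the root (index 0) is the empty history, and the
    // children of node i are nrJO*i + 1 .. nrJO*i + nrJO.
    Index GetSuccessorJOHI(Index johI, Index joI)
    {
        const Index nrJO = 4; // illustrative number of joint observations
        return johI * nrJO + joI + 1;
    }

    // Schematic episode: start from the empty joint observation history
    // and advance the index with each received observation.
    void RunEpisode(unsigned int horizon)
    {
        Index johI = INITIAL_JOHI;
        for(unsigned int t = 0; t < horizon; ++t)
        {
            Index joI = 0; // the joint observation received at stage t
            johI = GetSuccessorJOHI(johI, joI);
        }
    }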
const unsigned int Globals::MAXHORIZON = 999999

The highest horizon we will consider.
When the horizon of a problem is set to this value, we consider it an infinite-horizon problem.
Referenced by BeliefSetNonStationary::BeliefSetNonStationary(), Perseus::CheckConvergence(), JPolComponent_VectorImplementation::Construct(), PlanningUnitMADPDiscrete::Deinitialize(), Perseus::GetInitialValueFunction(), BayesianGameBase::GetNrPolicyDomainElements(), PlanningUnitMADPDiscrete::GetNrPolicyDomainElements(), PlanningUnitMADPDiscrete::GetTimeStepForJAOHI(), MDPValueIteration::Initialize(), MDPPolicyIteration::Initialize(), MDPPolicyIterationGPU::Initialize(), PlanningUnitMADPDiscrete::Initialize(), SimulationDecPOMDPDiscrete::Initialize(), MDPSolver::Print(), AlphaVectorPlanning::SampleBeliefs(), AlphaVectorPlanning::SampleBeliefsNonStationary(), PlanningUnitDecPOMDPDiscrete::SanityCheck(), PartialJointPolicyPureVector::SetDepth(), JointPolicyPureVector::SetDepth(), and ArgumentHandlers::solutionMethodOptions_parse_argument().
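A minimal sketch of the convention stated above, letting finite- and infinite-horizon problems share one code path:

    const unsigned int MAXHORIZON = 999999;

    bool IsInfiniteHorizon(unsigned int horizon)
    {
        // By convention, a horizon set to MAXHORIZON means "infinite".
        return horizon == MAXHORIZON;
    }

    // A planner can then branch on the convention, e.g.:
    //   if(IsInfiniteHorizon(GetHorizon()))
    //       /* use discounted, stationary value functions */;
    //   else
    //       /* plan stage-by-stage for a finite horizon */;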
const double Globals::PROB_PRECISION = 1e-12

The precision for probabilities.
Used to determine when two probabilities are considered equal, for instance when converting full beliefs to sparse beliefs.
Referenced by QPOMDP::ComputeRecursively(), QBG::ComputeRecursively(), QHybrid::ComputeRecursively(), BayesianGameWithClusterInfo::ConstructClusteredIndividualTypes(), EqualProbability(), AlphaVectorPruning::FindBeliefAccelerated(), AlphaVectorPlanning::GetDuplicateIndices(), MultiAgentDecisionProcessDiscreteFactoredStates::Initialize2DBN(), FactoredDecPOMDPDiscrete::MarginalizeISD(), MultiAgentDecisionProcessDiscreteFactoredStates::MarginalizeTransitionObservationModel(), BG_FactorGraphCreator::PerturbationTerm(), BGIP_SolverBranchAndBound< JP >::ReSolve(), PlanningUnitDecPOMDPDiscrete::SanityCheck(), MultiAgentDecisionProcessDiscrete::SanityCheck(), Belief::SanityCheck(), BeliefSparse::SanityCheck(), MultiAgentDecisionProcessDiscreteFactoredStates::SanityCheckObservations(), MultiAgentDecisionProcessDiscreteFactoredStates::SanityCheckTransitions(), ObservationModelMappingSparse::Set(), TransitionModelMappingSparse::Set(), EventObservationModelMappingSparse::Set(), MADPComponentDiscreteStates::SetInitialized(), BGIP_SolverBruteForceSearch< JP >::Solve(), BGIP_SolverBFSNonIncremental< JP >::Solve(), JointBeliefSparse::Update(), and JointBeliefSparse::UpdateSlow().
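A stand-alone sketch of the full-to-sparse belief conversion mentioned above; the std::map stands in for the toolbox's sparse belief type:

    #include <cstddef>
    #include <map>
    #include <vector>

    typedef unsigned int Index;
    const double PROB_PRECISION = 1e-12;

    // Convert a full belief vector to a sparse representation, dropping
    // entries that are zero up to PROB_PRECISION.
    std::map<Index, double> ToSparseBelief(const std::vector<double>& full)
    {
        std::map<Index, double> sparse;
        for(std::size_t s = 0; s < full.size(); ++s)
            if(full[s] > PROB_PRECISION) // treat anything smaller as zero
                sparse[static_cast<Index>(s)] = full[s];
        return sparse;
    }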
const double Globals::REWARD_PRECISION = 1e-12

Used to determine when two (immediate) rewards are considered equal.
Referenced by PerseusConstrainedPOMDPPlanner::BackupStage(), FactoredDecPOMDPDiscrete::CacheFlatRewardModel(), FactoredDecPOMDPDiscrete::ClipRewardModel(), EqualReward(), AlphaVectorPruning::FindBeliefAccelerated(), RewardModelMappingSparse::Set(), RewardModelMappingSparseMapped::Set(), and AlphaVectorPlanning::VectorIsInValueFunction().
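A sketch of the pattern used by the sparse Set() methods listed above: entries that are zero up to REWARD_PRECISION are simply not stored. The SparseRewardModel struct is an illustrative stand-in, not RewardModelMappingSparse itself:

    #include <cmath>
    #include <map>
    #include <utility>

    typedef unsigned int Index;
    const double REWARD_PRECISION = 1e-12;

    // Illustrative sparse reward model: lookups of absent
    // (state, joint action) pairs return 0.0.
    struct SparseRewardModel
    {
        std::map<std::pair<Index, Index>, double> _m_R; // (s,ja) -> R(s,ja)

        void Set(Index sI, Index jaI, double r)
        {
            if(std::fabs(r) > REWARD_PRECISION)
                _m_R[std::make_pair(sI, jaI)] = r;
            else
                _m_R.erase(std::make_pair(sI, jaI)); // treat as zero
        }

        double Get(Index sI, Index jaI) const
        {
            std::map<std::pair<Index, Index>, double>::const_iterator it =
                _m_R.find(std::make_pair(sI, jaI));
            return (it == _m_R.end()) ? 0.0 : it->second;
        }
    };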