SavedPolicy Class Reference

#include <SavedPolicy.hh>

Inheritance: SavedPolicy derives from Agent.

Public Member Functions

virtual int first_action (const std::vector< float > &s)
virtual void last_action (float r)
void loadPolicy (const char *filename)
virtual int next_action (float r, const std::vector< float > &s)
 SavedPolicy (int numactions, const char *filename)
virtual void seedExp (std::vector< experience >)
virtual void setDebug (bool d)
virtual ~SavedPolicy ()

Protected Types

typedef const std::vector< float > * state_t

Protected Member Functions

state_t canonicalize (const std::vector< float > &s)
void printState (const std::vector< float > &s)

Private Attributes

bool ACTDEBUG
bool LOADDEBUG
bool loaded
const int numactions
std::map< state_t, std::vector< float > > Q
std::set< std::vector< float > > statespace

Detailed Description

Agent that uses a saved policy from a file.

Definition at line 16 of file SavedPolicy.hh.
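A minimal usage sketch of the episode interface (first_action / next_action / last_action). The Env type, its sensation()/apply()/terminal() members, the action count, and the file name "policy.dat" are illustrative assumptions, not part of this package's API.

#include <vector>
#include "SavedPolicy.hh"

// Hypothetical environment, defined here only to show the episode loop.
struct Env {
  int steps;
  Env() : steps(0) {}
  std::vector<float> sensation() { return std::vector<float>(2, 0.0f); }  // current state features
  float apply(int /*action*/) { ++steps; return 0.0f; }                   // execute action, return reward
  bool terminal() { return steps >= 100; }                                // episode finished?
};

void runEpisode(Env &env) {
  // Illustrative values: 4 actions, policy previously saved to "policy.dat".
  SavedPolicy agent(4, "policy.dat");

  int act = agent.first_action(env.sensation());
  float r = env.apply(act);

  while (!env.terminal()) {
    act = agent.next_action(r, env.sensation());
    r = env.apply(act);
  }
  agent.last_action(r);  // pass the final reward to close the episode
}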


Member Typedef Documentation

typedef const std::vector<float>* SavedPolicy::state_t [protected]

The implementation maps all sensations to a set of canonical pointers, which serve as the internal representation of environment state.

Definition at line 37 of file SavedPolicy.hh.


Constructor & Destructor Documentation

SavedPolicy::SavedPolicy ( int numactions, const char * filename )

Standard constructor

Parameters:
numactions  The number of possible actions

Definition at line 4 of file SavedPolicy.cc.

SavedPolicy::~SavedPolicy ( ) [virtual]

Definition at line 17 of file SavedPolicy.cc.


Member Function Documentation

SavedPolicy::state_t SavedPolicy::canonicalize ( const std::vector< float > &  s) [protected]

Produces a canonical representation of the given sensation.

Parameters:
s  The current sensation from the environment.
Returns:
A pointer to an equivalent state in statespace.

Definition at line 86 of file SavedPolicy.cc.
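A minimal sketch of how such a canonicalization can work (the actual body lives in SavedPolicy.cc and may differ): the sensation is inserted into statespace, and the address of the stored element is returned, so identical sensations always yield the same pointer.

#include <set>
#include <vector>

typedef const std::vector<float>* state_t;

// Stand-in for SavedPolicy::statespace, for illustration only.
static std::set<std::vector<float> > statespace;

state_t canonicalizeSketch(const std::vector<float> &s) {
  // insert() returns {iterator, bool}; the iterator points at the stored
  // element whether it was newly inserted or already present. std::set
  // nodes are never relocated, so the address remains a stable state key.
  std::pair<std::set<std::vector<float> >::iterator, bool> res = statespace.insert(s);
  return &(*res.first);
}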

int SavedPolicy::first_action ( const std::vector< float > &  s) [virtual]

Implements Agent.

Definition at line 19 of file SavedPolicy.cc.

void SavedPolicy::last_action ( float  r) [virtual]

Implements Agent.

Definition at line 78 of file SavedPolicy.cc.

void SavedPolicy::loadPolicy ( const char *  filename)

Definition at line 119 of file SavedPolicy.cc.

int SavedPolicy::next_action ( float r, const std::vector< float > & s ) [virtual]

Implements Agent.

Definition at line 48 of file SavedPolicy.cc.

void SavedPolicy::printState ( const std::vector< float > &  s) [protected]

Definition at line 106 of file SavedPolicy.cc.

void SavedPolicy::seedExp ( std::vector< experience > seeds) [virtual]

Reimplemented from Agent.

Definition at line 114 of file SavedPolicy.cc.

virtual void SavedPolicy::setDebug ( bool  d) [inline, virtual]

Implements Agent.

Definition at line 28 of file SavedPolicy.hh.


Member Data Documentation

bool SavedPolicy::ACTDEBUG [private]

Definition at line 60 of file SavedPolicy.hh.

bool SavedPolicy::LOADDEBUG [private]

Definition at line 61 of file SavedPolicy.hh.

bool SavedPolicy::loaded [private]

Definition at line 62 of file SavedPolicy.hh.

const int SavedPolicy::numactions [private]

Definition at line 58 of file SavedPolicy.hh.

std::map<state_t, std::vector<float> > SavedPolicy::Q [private]

The primary data structure of the learning algorithm, the value function Q. For state_t s and int a, Q[s][a] gives the learned maximum expected future discounted reward conditional on executing action a in state s.

Definition at line 56 of file SavedPolicy.hh.
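As an illustration of this layout, the greedy action for a canonical state is the argmax over the per-action values stored for it. This is a sketch only; the actual selection code in SavedPolicy.cc may handle missing states or ties differently.

#include <map>
#include <vector>

typedef const std::vector<float>* state_t;

// Pick the action with the largest stored value for state s.
// qtable stands in for SavedPolicy::Q; ties go to the lowest action index here.
int greedyAction(const std::map<state_t, std::vector<float> > &qtable, state_t s) {
  const std::vector<float> &vals = qtable.at(s);
  int best = 0;
  for (int a = 1; a < (int)vals.size(); ++a) {
    if (vals[a] > vals[best]) best = a;
  }
  return best;
}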

std::set<std::vector<float> > SavedPolicy::statespace [private]

Set of all distinct sensations seen. Pointers to elements of this set serve as the internal representation of the environment state.

Definition at line 50 of file SavedPolicy.hh.


The documentation for this class was generated from the following files:

SavedPolicy.hh
SavedPolicy.cc

rl_agent
Author(s): Todd Hester
autogenerated on Thu Jun 6 2019 22:00:14