Classes | Functions
eval Namespace Reference

Classes

class  Problem

Functions

def buildNameIdentifierDict
def checkPlans
def evalDir
def evaluateProblem
def main
def parseResults
def readRefDataFromFile
def writeEvalData
def writeTex
def writeTexTable
def writeTexTableEntry
def writeTexTableLine

Function Documentation

def eval.buildNameIdentifierDict (   evaldict,
  name_entries_dict,
  curIdent 
)
From a dictionary like {K: {A: {X: 1, Y: 2}, B: {X: 1, Z: 3}}}
    build a dictionary: {X: {"K/A": 1, "K/B": 1}, Y: {"K/A": 2}, Z: {"K/B": 3}} 

Definition at line 214 of file eval.py.
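The documented transformation can be sketched as follows. This is a hypothetical re-implementation, not the code from eval.py; it assumes the nesting is uniform (inner dicts all the way down to the leaf name -> value mappings) and accumulates the key path in curIdent:

```python
def buildNameIdentifierDict(evaldict, name_entries_dict, curIdent=""):
    # Hypothetical sketch: walk the nested dicts, joining keys with "/"
    # until a leaf mapping of names to values is reached, then record
    # each value under its name, keyed by the accumulated path.
    for key, value in evaldict.items():
        path = curIdent + "/" + key if curIdent else key
        if all(isinstance(v, dict) for v in value.values()):
            buildNameIdentifierDict(value, name_entries_dict, path)
        else:
            for name, entry in value.items():
                name_entries_dict.setdefault(name, {})[path] = entry
```

With the example from the docstring, {K: {A: {X: 1, Y: 2}, B: {X: 1, Z: 3}}} yields {X: {"K/A": 1, "K/B": 1}, Y: {"K/A": 2}, Z: {"K/B": 3}}.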

def eval.checkPlans (   evaldict,
  refDict 
)
Check whether the plans in evaldict are equal to those in refDict.

Definition at line 452 of file eval.py.

def eval.evalDir (   path,
  files,
  files_are_condensed 
)
Evaluate the directory in path that contains a number of
    plan...pddl.best files.
    Returns a dict mapping from path to a list of Problems.
    
    If files_are_condensed is False, the files list is a list of
    directories that each contain one problem with plan.soln.XXX files.

Definition at line 130 of file eval.py.

def eval.evaluateProblem (   problem,
  referenceProblem,
  target 
)
Evaluate problem's target property.
    If a referenceProblem is given, it is evaluated with respect to that.
    In that case, problem may be None if there was no data, and None is returned. 

Definition at line 255 of file eval.py.
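A minimal sketch of the documented behavior. This is an assumption, not the code from eval.py: it guesses that "evaluated with respect to" the reference means a quotient, in line with the score = reference/planner-quality convention of the IPC 2008 data described under readRefDataFromFile, and that Problem exposes the target property as an attribute:

```python
def evaluateProblem(problem, referenceProblem, target):
    # Hypothetical sketch: read the target attribute from the problem;
    # with a reference problem the value is taken relative to it
    # (quotient direction follows score = reference/planner-quality,
    # which is an assumption).
    if referenceProblem is not None:
        if problem is None:  # no data for this run
            return None
        return getattr(referenceProblem, target) / getattr(problem, target)
    return getattr(problem, target)
```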

def eval.main ( )

Definition at line 512 of file eval.py.

def eval.parseResults (   eval_dir)
Parse eval_dir and all subdirectories to create results.
    Returns a dictionary of the directory structure relative
    to eval_dir containing the evaluated problems. 

Definition at line 178 of file eval.py.
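The directory traversal can be sketched like this. It is a hypothetical simplification: it maps each subdirectory to the sorted plan file names it finds, whereas the real parseResults builds Problem objects (via evalDir) for each directory; the file-name patterns are taken from the evalDir docstring:

```python
import os

def parseResults(eval_dir):
    # Hypothetical sketch: walk eval_dir and record, per subdirectory
    # (relative path as key), the plan files found there. The patterns
    # ".best" and "plan.soln" come from the evalDir documentation.
    results = {}
    for path, _dirs, files in os.walk(eval_dir):
        plans = [f for f in files
                 if f.endswith(".best") or f.startswith("plan.soln")]
        if plans:
            results[os.path.relpath(path, eval_dir)] = sorted(plans)
    return results
```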

def eval.readRefDataFromFile (   ref_file)
Read reference data from a file and return a problem dict
    containing {Domain: problem_list}.
    Format of the line-wise IPC 2008 data:
    tempo-sat  temporal-fast-downward  transport-numeric  12        OK       982                433                 0.440936863544
    track      planner                 domain             problem#  solved?  planner-quality   reference-quality   score=reference/planner-quality
    We are interested in the tempo-sat track, any planner (don't care), any domain, any problem -> read the reference-quality.

Definition at line 90 of file eval.py.
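The line format above can be parsed as sketched below. This is a hypothetical helper, not the code from eval.py; it works on an iterable of lines rather than a file path, and groups the reference-quality column by domain as the docstring describes:

```python
def parseRefLines(lines):
    # Hypothetical sketch: for every "tempo-sat" line, group the
    # reference-quality column (7th field) by domain (3rd field).
    # The real readRefDataFromFile reads these lines from ref_file.
    ref = {}
    for line in lines:
        parts = line.split()
        if len(parts) < 8 or parts[0] != "tempo-sat":
            continue
        domain, ref_quality = parts[2], float(parts[6])
        ref.setdefault(domain, []).append(ref_quality)
    return ref
```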

def eval.writeEvalData (   evaldict,
  path 
)
Write eval.dat with the evaluation for all problems
    in each domain directory. 

Definition at line 201 of file eval.py.

def eval.writeTex (   evaldict,
  filename,
  refDict 
)
Write a LaTeX file for this dict.
    For a dict {X: {A: problems, B: problems}, Y: {A: problems, C: problems}},
    the output is one table each for A, B, and C, where A has columns X/Y,
    B has column X, and C has column Y.
    

Definition at line 225 of file eval.py.

def eval.writeTexTable (   nameEntriesDict,
  f,
  target,
  better,
  refEntriesDict 
)
Write a LaTeX table for this dict.
    Creates a table that has one row per problem
    and one column per Setting/Version.
    target gives the target property of a problem to write in a column.
    If, when comparing the targets with better, an item is equal to
    the best, the entry is marked bold.
    

Definition at line 354 of file eval.py.

def eval.writeTexTableEntry (   f,
  problem,
  referenceProblem,
  target,
  best,
  num,
  sums 
)
Write one entry of an output table referring to the target property of problem.
    If referenceProblem is given, the problem is compared to the referenceProblem
    and relative values are printed.
    Comparison is done with respect to best.
    In that case, problem may be None if there was no result/data.
    sums is a dictionary of num -> accumulated sum that should be updated. 

Definition at line 278 of file eval.py.
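The entry formatting can be sketched with a small helper. This is hypothetical: it only produces the cell text (the real writeTexTableEntry writes to the open file f and also updates sums), and the dash for missing data and the two-decimal format are assumptions:

```python
def formatTexEntry(value, best):
    # Hypothetical sketch of the documented behavior: a missing result
    # prints a placeholder (assumed "-"), and the entry equal to the
    # best value in its line is set in bold.
    if value is None:
        return "-"
    text = "%.2f" % value
    if value == best:
        return r"\textbf{%s}" % text
    return text
```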

def eval.writeTexTableLine (   f,
  problem_id,
  runs,
  refVals,
  target,
  better,
  numEntries,
  sums 
)
Write one line for problem_id in the output table referring to the target property of the problem.
    If refVals is given, the problem is compared to the reference
    and relative values are printed.
    Comparison is done with the better property on the quotient of the properties.
    In that case, runs might not contain a problem for problem_id if there was no result/data.
    numEntries is only used to decide where the line ends.
    sums is a dictionary of run_num -> accumulated sum. 

Definition at line 302 of file eval.py.



planner_benchmarks
Author(s): Multiple
autogenerated on Mon Oct 6 2014 07:51:52