Classes
    class Problem

Functions
    def buildNameIdentifierDict
    def checkPlans
    def evalDir
    def evaluateProblem
    def main
    def parseResults
    def readRefDataFromFile
    def writeEvalData
    def writeTex
    def writeTexTable
    def writeTexTableEntry
    def writeTexTableLine
def eval.buildNameIdentifierDict(evaldict, name_entries_dict, curIdent)
def eval.checkPlans(evaldict, refDict)
def eval.evalDir(path, files, files_are_condensed)
Evaluate the directory in path that contains a number of plan...pddl.best files. Returns a dict mapping from path to a list of Problems. If files_are_condensed is False, the files list is a list of directories, each containing one problem with plan.soln.XXX files.
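The two modes of the docstring above can be sketched as follows. This is a hypothetical re-creation of the interface, not the actual implementation: the function name, the exact file patterns, and returning file lists instead of `Problem` objects are all assumptions.

```python
import os
from collections import defaultdict

def eval_dir_sketch(path, files, files_are_condensed):
    """Hypothetical sketch of evalDir: map each problem path to the
    list of result files found for it (the real function builds
    Problem objects instead)."""
    results = defaultdict(list)
    if files_are_condensed:
        # Each entry in files is a *.best result file for one problem.
        for f in files:
            if f.endswith(".best"):
                results[os.path.join(path, f)].append(f)
    else:
        # Each entry in files is a directory holding the plan.soln.XXX
        # files of a single problem.
        for d in files:
            full = os.path.join(path, d)
            for f in sorted(os.listdir(full)):
                if f.startswith("plan.soln"):
                    results[full].append(f)
    return dict(results)
```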
def eval.evaluateProblem(problem, referenceProblem, target)
def eval.parseResults(eval_dir)
def eval.readRefDataFromFile(ref_file)
Read reference data from a file and return a problem dict containing {Domain: problem_list}.

Format of the linewise ipc2008 data:
    tempo-sat temporal-fast-downward transport-numeric 12 OK 982 433 0.440936863544
    track planner domain problem# solved? planner-quality reference-quality score=reference/planner-quality

We are interested in the tempo-sat track, any planner (don't care), any domain, any problem -> read the reference-quality.
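Given the field layout documented above, a parser for this line format could look like the sketch below. The function names and the exact return structure ({domain: {problem#: reference-quality}}) are assumptions; only the line format itself comes from the docstring.

```python
def parse_ref_line(line):
    """Split one line of the linewise ipc2008 data.

    Field layout (from the docstring):
    track planner domain problem# solved? planner-quality
    reference-quality score
    """
    track, planner, domain, num, solved, pq, refq, score = line.split()
    return {
        "track": track,
        "planner": planner,
        "domain": domain,
        "problem": int(num),
        "solved": solved == "OK",
        "reference_quality": float(refq),
    }

def read_ref_data(lines):
    """Collect {domain: {problem#: reference-quality}} for the
    tempo-sat track, ignoring which planner produced each line."""
    ref = {}
    for line in lines:
        entry = parse_ref_line(line)
        if entry["track"] != "tempo-sat":
            continue  # only the tempo-sat track is of interest
        ref.setdefault(entry["domain"], {})[entry["problem"]] = \
            entry["reference_quality"]
    return ref
```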
def eval.writeEvalData(evaldict, path)
def eval.writeTex(evaldict, filename, refDict)
def eval.writeTexTable(nameEntriesDict, f, target, better, refEntriesDict)
Write a LaTeX table for this dict. Creates a table that has one row per problem and one column per Setting/Version. target gives the property of a problem to write in each column. If, when comparing the targets with better, an item is equal to the best, the entry is marked bold.
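The bold-marking rule described above can be illustrated with a small sketch. Both helper names are invented; the docstring only specifies that `better` compares target values and that the best entry is set in bold.

```python
def best_value(values, better):
    """Pick the best among the non-None values, where
    better(a, b) -> True means a beats b."""
    best = None
    for v in values:
        if v is None:
            continue  # a missing run does not compete
        if best is None or better(v, best):
            best = v
    return best

def format_cell(value, best):
    """Render one table cell, bolding it if it equals the best."""
    if value is None:
        return "-"
    text = str(value)
    return r"\textbf{%s}" % text if value == best else text
```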
def eval.writeTexTableEntry(f, problem, referenceProblem, target, best, num, sums)
Write one entry of an output table referring to the target property of problem. If referenceProblem is given, the problem is compared to referenceProblem and relative values are printed; comparison is done with respect to best. In that case problem may be None if there was no result/data. sums is a dictionary of num -> accumulated sum that should be updated.
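A minimal sketch of the behavior described above, assuming problems are plain dicts and returning the cell text instead of writing to `f`; the name, number formatting, and dict-based access are all assumptions.

```python
def write_entry_sketch(problem, reference, target, best, num, sums):
    """Hypothetical stand-in for writeTexTableEntry."""
    if problem is None:
        return "-"                     # no result/data for this run
    value = problem[target]
    if reference is not None:
        # Relative value with respect to the reference problem.
        value = value / reference[target]
    sums[num] = sums.get(num, 0.0) + value
    cell = "%.2f" % value
    if value == best:
        cell = r"\textbf{%s}" % cell   # mark the best entry bold
    return cell
```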
def eval.writeTexTableLine(f, problem_id, runs, refVals, target, better, numEntries, sums)
Write one line for problem_id in the output table referring to the target property of the problem. If refVals is given, the problem is compared to the reference and relative values are printed; comparison is done with the better property on the quotient of the two values. In that case runs may not contain a problem for problem_id if there was no result/data. numEntries is only used to decide when the line ends. sums is a dictionary of run_num -> accumulated sum.
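The row assembly described above might look roughly like this; the bold-marking via `better` is omitted for brevity, problems are plain dicts, and the row is returned as a string rather than written to `f`, so everything beyond the docstring's description is an assumption.

```python
def write_line_sketch(problem_id, runs, ref_vals, target, num_entries, sums):
    """Hypothetical stand-in for writeTexTableLine. runs maps
    run number -> problem dict (entries may be missing); ref_vals,
    if given, maps problem_id -> reference value."""
    cells = [str(problem_id)]
    ref = ref_vals.get(problem_id) if ref_vals else None
    for num in range(num_entries):
        problem = runs.get(num)
        if problem is None:
            cells.append("-")          # no result/data for this run
            continue
        value = problem[target]
        if ref is not None:
            value = value / ref        # relative to the reference value
        sums[num] = sums.get(num, 0.0) + value
        cells.append("%.2f" % value)
    # numEntries decided when the line ends; terminate the LaTeX row.
    return " & ".join(cells) + r" \\"
```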