"""
This module is for formatting and writing unit-tests in Python. The general format is as follows:

1. Use start() to start a test and give it, as an argument, the name of the test
2. Use whatever check functions are relevant to test the run
3. Use finish() to signal the end of the test
4. Repeat stages 1-3 for as many tests as you want to run in the file
5. Use print_results_and_exit() to print the number of tests and assertions that passed/failed
   in the correct format, before exiting with 0 if all tests passed or with 1 if any test failed

In addition, you may want to use the 'info' functions in this module to add more detailed
messages in case of a failed check.
"""

import os, sys, subprocess, traceback, platform
from rspy import log   # project-local logging helper; provides the log.d() / log.e() calls used below
n_assertions = 0
n_failed_assertions = 0
n_tests = 0
n_failed_tests = 0
test_failed = False
test_in_progress = False
test_info = {}   # Dictionary for holding additional information to print on a failed check
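The five-step flow described in the module docstring can be illustrated with a minimal, self-contained sketch. The stubs below are simplified stand-ins for this module's API (not the real implementations), written so the sketch runs on its own:

```python
# Minimal stand-ins for start/check/finish/print_results_and_exit,
# simplified so the sketch runs standalone (not the real implementations).
n_tests = n_failed_tests = n_assertions = n_failed_assertions = 0
test_failed = False

def start( *test_name ):
    global n_tests, test_failed
    n_tests += 1
    test_failed = False
    print( 'Test:', *test_name )

def check( exp ):
    global n_assertions, n_failed_assertions, test_failed
    n_assertions += 1
    if not exp:
        n_failed_assertions += 1
        test_failed = True
    return bool( exp )

def finish():
    global n_failed_tests
    if test_failed:
        n_failed_tests += 1

def print_results_and_exit():
    # The real function exits the process; the sketch returns the exit code instead
    if n_failed_tests:
        print( 'test cases:', n_tests, '|', n_failed_tests, 'failed' )
        return 1
    print( 'All tests passed (' + str(n_assertions) + ' assertions in ' + str(n_tests) + ' test cases)' )
    return 0

# The usage pattern: start -> checks -> finish, repeated, then one summary
start( 'addition' )
check( 1 + 1 == 2 )
finish()

start( 'string repetition' )
check( 'ab' * 2 == 'abab' )
finish()

exit_code = print_results_and_exit()
```

With both tests passing, the summary line is printed and the exit code is 0.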
def set_env_vars( env_vars ):
    """
    We want certain environment variables set when we get here. We assume they're not set.

    However, it is impossible for the current running interpreter to pick them up. Instead, we rerun ourselves
    in a child process that inherits the environment we set.

    To do this, we depend on a specific argument in sys.argv that tells us this is the rerun (meaning child
    process). When we see it, we assume the variables are set and don't do anything else.

    For this to work well, the environment variable requirement (the set_env_vars call) should appear as one
    of the first lines of the test.

    :param env_vars: A dictionary where the keys are the names of the environment variables and the values are
        the wanted values in string form (environment variables must be strings)
    """
    if sys.argv[-1] != 'rerun':
        log.d( 'environment variables needed:', env_vars )
        for env_var, val in env_vars.items():
            os.environ[env_var] = val
        cmd = [sys.executable]
        if 'site' not in sys.modules:
            # -S : don't imply 'import site' on initialization
            cmd += ['-S']
        cmd += sys.argv
        cmd += ['rerun']
        log.d( 'rerun:', cmd )
        p = subprocess.run( cmd, stderr=subprocess.PIPE, universal_newlines=True )
        sys.exit( p.returncode )
    log.d( 'rerun detected' )
    sys.argv = sys.argv[:-1]   # remove the rerun marker
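The mechanism boils down to one fact: a child process inherits the parent's environment, including variables set after the parent started. This standalone sketch demonstrates that part (DEMO_VAR is a made-up variable name for the example):

```python
import os, subprocess, sys

# Set a variable in our own environment; any child process we spawn inherits it.
os.environ['DEMO_VAR'] = 'hello'   # DEMO_VAR is a hypothetical example name

# The child interpreter sees the variable even though our own interpreter
# was started before the variable existed.
p = subprocess.run(
    [sys.executable, '-c', "import os; print(os.environ['DEMO_VAR'])"],
    stdout=subprocess.PIPE, universal_newlines=True )
child_output = p.stdout.strip()
print( child_output )
```

The module's version goes one step further: the child it launches is the test itself, re-invoked with a 'rerun' marker appended to argv so the child knows not to recurse.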
def find_first_device_or_exit():
    """
    :return: the first device that was found; if no device is found the test is skipped. That way we can still
        run the unit-tests when no device is connected, and not fail the tests that check a connected device
    """
    import pyrealsense2 as rs
    c = rs.context()
    if not c.devices.size():
        print( "No device found, skipping test" )
        sys.exit( 0 )
    return c.devices[0]
def find_devices_by_product_line_or_exit( product_line ):
    """
    :param product_line: The product line of the wanted devices
    :return: A list of devices of the specific product line that were found; if no device is found the test
        is skipped. That way we can still run the unit-tests when no device is connected,
        and not fail the tests that check a connected device
    """
    import pyrealsense2 as rs
    c = rs.context()
    devices_list = c.query_devices( product_line )
    if devices_list.size() == 0:
        print( "No device of the", product_line, "product line was found; skipping test" )
        sys.exit( 0 )
    log.d( 'found', devices_list.size(), product_line, 'devices:', [dev for dev in devices_list] )
    return devices_list
def print_stack():
    """
    Function for printing the current call stack. Used when an assertion fails
    """
    print( 'Traceback (most recent call last):' )
    stack = traceback.format_stack()
    # Skip the last frame, which is this print_stack() call itself
    for line in reversed( stack[:-1] ):
        print( line, end = '' )
"""
The following functions are for asserting test cases:
The check family of functions tests an expression and continues the test whether the assertion succeeded or
failed. The require family are equivalent, but execution is aborted if the assertion fails. In this module,
the require family is used by passing abort_if_failed=True to the check functions.
"""


def check_failed():
    """
    Function for when a check fails
    """
    global test_failed, n_failed_assertions
    n_failed_assertions += 1
    test_failed = True
    print_info()


def abort():
    log.e( "Aborting test" )
    sys.exit( 1 )
def check( exp, abort_if_failed = False ):
    """
    Basic function for asserting expressions.
    :param exp: An expression to be asserted; if false, the assertion fails
    :param abort_if_failed: If True and the assertion fails, the test will be aborted
    :return: True if the assertion passed, False otherwise
    """
    global n_assertions
    n_assertions += 1
    if not exp:
        print_stack()
        print( "    check failed; received", exp )
        check_failed()
        if abort_if_failed:
            abort()
        return False
    reset_info()
    return True
def check_equal( result, expected, abort_if_failed = False ):
    """
    Used for asserting a variable has the expected value
    :param result: The actual value of a variable
    :param expected: The expected value of the variable
    :param abort_if_failed: If True and the assertion fails, the test will be aborted
    :return: True if the assertion passed, False otherwise
    """
    if type(expected) == list:
        print( "check_equal should not be used for lists. Use check_equal_lists instead" )
        if abort_if_failed:
            abort()
        return False
    global n_assertions
    n_assertions += 1
    if result != expected:
        print_stack()
        print( "    result  :", result )
        print( "    expected:", expected )
        check_failed()
        if abort_if_failed:
            abort()
        return False
    reset_info()
    return True
def unreachable( abort_if_failed = False ):
    """
    Used to assert that a certain section of code (e.g. an if block) is not reached
    :param abort_if_failed: If True and this function is reached, the test will be aborted
    """
    check( False, abort_if_failed )


def unexpected_exception():
    """
    Used to assert that an except block is not reached. It's different from unreachable because it expects
    to be in an except block and prints the stack of the error, not the call-stack for this function
    """
    global n_assertions
    n_assertions += 1
    traceback.print_exc( file = sys.stdout )
    check_failed()
def check_equal_lists( result, expected, abort_if_failed = False ):
    """
    Used to assert that two lists are exactly identical, including ordering, and to report where they
    differ (a plain == comparison would only yield True/False).
    :param result: The actual list
    :param expected: The expected list
    :param abort_if_failed: If True and the assertion fails, the test will be aborted
    :return: True if the assertion passed, False otherwise
    """
    global n_assertions
    n_assertions += 1
    failed = False
    if len(result) != len(expected):
        failed = True
        print( "Check equal lists failed due to lists of different sizes:" )
        print( "The resulting list has", len(result), "elements, but the expected list has",
               len(expected), "elements" )
    i = 0
    for res, exp in zip( result, expected ):
        if res != exp:
            failed = True
            print( "Check equal lists failed due to unequal elements:" )
            print( "The element of index", i, "in both lists was not equal" )
        i += 1
    if failed:
        print_stack()
        print( "    result list  :", result )
        print( "    expected list:", expected )
        check_failed()
        if abort_if_failed:
            abort()
        return False
    reset_info()
    return True
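The element-by-element comparison that makes this function more informative than a plain `==` can be sketched on its own (`first_mismatch` is a hypothetical helper written for this illustration, not part of the module):

```python
def first_mismatch( result, expected ):
    """Return the index of the first differing element, or None if the lists match exactly."""
    if len(result) != len(expected):
        # The shorter list "runs out" at this index
        return min( len(result), len(expected) )
    for i, (res, exp) in enumerate( zip( result, expected ) ):
        if res != exp:
            return i
    return None

print( first_mismatch( [1, 2, 3], [1, 2, 3] ) )   # None: identical, same order
print( first_mismatch( [1, 2, 3], [1, 3, 2] ) )   # 1: same elements, different order
```

Note that reordered elements count as a mismatch: the comparison is positional, which is exactly the guarantee check_equal_lists is after.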
def check_exception( exception, expected_type, expected_msg = None, abort_if_failed = False ):
    """
    Used to assert a certain type of exception was raised; should be placed in the except block
    :param exception: The exception that was raised
    :param expected_type: The expected type of the exception
    :param expected_msg: The expected message in the exception
    :param abort_if_failed: If True and the assertion fails, the test will be aborted
    :return: True if the assertion passed, False otherwise
    """
    failed = False
    if type(exception) != expected_type:
        failed = [ "    raised exception was of type", type(exception),
                   "\n    but expected type", expected_type ]
    elif expected_msg and str(exception) != expected_msg:
        failed = [ "    exception message:", str(exception),
                   "\n    but we expected:", expected_msg ]
    if failed:
        print_stack()
        print( *failed )
        check_failed()
        if abort_if_failed:
            abort()
        return False
    reset_info()
    return True
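The intended call pattern is: trigger the error inside a try block, then assert on the caught exception inside the except block. This standalone sketch shows the pattern (`verify_exception` is a hypothetical, simplified stand-in for check_exception):

```python
def verify_exception( exception, expected_type, expected_msg = None ):
    # Hypothetical stand-in for check_exception: check type first, then the optional message
    if type(exception) != expected_type:
        return False
    if expected_msg and str(exception) != expected_msg:
        return False
    return True

try:
    int( 'not a number' )           # raises ValueError
except Exception as e:
    ok = verify_exception( e, ValueError )
print( ok )
```

Checking `type(exception) != expected_type` (rather than isinstance) means a subclass of the expected type fails the check; that strictness is usually what a unit test wants.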
def check_frame_drops( frame, previous_frame_number, allowed_drops = 1 ):
    """
    Used for checking frame drops while streaming
    :param frame: Current frame being checked
    :param previous_frame_number: Number of the previous frame
    :param allowed_drops: Maximum number of frame drops we accept
    :return: False if too many frames were dropped or frames were out of order, True otherwise
    """
    global test_in_progress
    if not test_in_progress:
        return True
    frame_number = frame.get_frame_number()
    failed = False
    if previous_frame_number > 0:
        dropped_frames = frame_number - (previous_frame_number + 1)
        if dropped_frames > allowed_drops:
            print( dropped_frames, "frame(s) starting from frame", previous_frame_number + 1, "were dropped" )
            failed = True
        elif dropped_frames < 0:
            print( "Frames repeated or out of order. Got frame", frame_number, "after frame",
                   previous_frame_number )
            failed = True
    if failed:
        fail()
        return False
    reset_info()
    return True


class Information:
    """
    Class representing the information stored in the test_info dictionary
    """
    def __init__( self, value, persistent = False ):
        self.value = value
        self.persistent = persistent
def info( name, value, persistent = False ):
    """
    This function is used to store additional information to print in case of a failed test. This information
    is erased after the next check. The information is stored in the dictionary test_info; keys are names
    (strings) and the items are of the Information class.
    If information with the given name is already stored, it will be replaced
    :param name: The name of the variable
    :param value: The value this variable stores
    :param persistent: If this parameter is True, the information stored will be kept after the following check
        and will only be erased at the end of the test (or when reset_info is called with True)
    """
    global test_info
    test_info[name] = Information( value, persistent )


def reset_info( persistent = False ):
    """
    Erases the stored information
    :param persistent: If this parameter is True, even the persistent information will be erased
    """
    global test_info
    if persistent:
        test_info.clear()
    else:
        # Keep only the persistent entries (iterate over a copy so we can delete safely)
        for name, information in list( test_info.items() ):
            if not information.persistent:
                del test_info[name]


def print_info():
    global test_info
    if not test_info:   # No information is stored
        return
    print( "Printing information" )
    for name, information in test_info.items():
        print( "Name:", name, " value:", information.value )
    reset_info()
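The persistent/non-persistent bookkeeping can be demonstrated standalone; the names below mirror the module's test_info machinery but are local stand-ins written for this sketch:

```python
class Information:
    def __init__( self, value, persistent = False ):
        self.value = value
        self.persistent = persistent

test_info = {}

def info( name, value, persistent = False ):
    test_info[name] = Information( value, persistent )

def reset_info( persistent = False ):
    if persistent:
        test_info.clear()
    else:
        # Keep only the entries marked persistent
        for name in list( test_info ):
            if not test_info[name].persistent:
                del test_info[name]

info( 'frame', 17 )                            # erased on the next normal reset
info( 'serial', 'X123', persistent = True )    # survives a normal reset
reset_info()
remaining = sorted( test_info )
print( remaining )                             # only the persistent entry remains
reset_info( persistent = True )
print( sorted( test_info ) )                   # everything is gone
```

In the module itself, the non-persistent reset happens automatically after every passing check, and the persistent reset at the end of each test.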
def fail():
    """
    Function for manually failing a test, in case you want a specific test that does not fit any check function
    """
    check_test_in_progress()
    global test_failed
    test_failed = True


def check_test_in_progress( in_progress = True ):
    global test_in_progress
    if test_in_progress != in_progress:
        if test_in_progress:
            raise RuntimeError( "test case is already running" )
        else:
            raise RuntimeError( "no test case is running" )
def start( *test_name ):
    """
    Used at the beginning of each test to reset the global variables
    :param test_name: Any number of arguments that combined give the name of this test
    """
    print_separator()
    global n_tests, test_failed, test_in_progress
    n_tests += 1
    test_failed = False
    test_in_progress = True
    print( *test_name )
def finish():
    """
    Used at the end of each test to check if it passed and print the answer
    """
    check_test_in_progress()
    global test_failed, n_failed_tests, test_in_progress
    if test_failed:
        n_failed_tests += 1
        print( "Test failed" )
    else:
        print( "Test passed" )
    reset_info( persistent = True )
    test_in_progress = False
def print_separator():
    """
    For use only in-between test-cases; this will separate them in some visual way so as
    to be easier to differentiate.
    """
    global n_tests
    if n_tests:
        print( '\n___' )


def print_results_and_exit():
    """
    Used to print the results of the tests in the file. The format has to agree with the expected format in
    check_log() in run-unit-tests and with the C++ format using Catch
    """
    print_separator()
    global n_assertions, n_tests, n_failed_assertions, n_failed_tests
    if n_failed_tests:
        passed = n_assertions - n_failed_assertions
        print( "test cases:", n_tests, "|", n_failed_tests, "failed" )
        print( "assertions:", n_assertions, "|", passed, "passed |", n_failed_assertions, "failed" )
        sys.exit( 1 )
    print( "All tests passed (" + str(n_assertions) + " assertions in " + str(n_tests) + " test cases)" )
    sys.exit( 0 )