Internal API reference
These modules are used internally for the test running process.
grader.asset_management module
-
class grader.asset_management.AssetFolder(tester_path, solution_path, other_files=[], is_code=False, add_to_path=True)[source]
Bases: builtins.object
-
files_in_path()[source]
-
remove()[source]
-
write_asset(asset_info)[source]
-
grader.asset_management.tempModule(code, working_dir=None, encoding='utf8')[source]
grader.code_runner module
-
grader.code_runner.call_command(cmd, timeout=inf, cwd=None, decode=True, **subproc_options)[source]
-
grader.code_runner.call_sandbox(sandbox_cmd, tester_path, solution_path)[source]
-
grader.code_runner.call_test(test_index, tester_path, solution_path, options)[source]
-
grader.code_runner.microseconds_passed(time_delta)[source]
-
grader.code_runner.read_proc_results(proc, decode)[source]
grader.datastructures module
-
class grader.datastructures.OrderedTestcases[source]
Bases: builtins.object
Class that acts like an ordered dictionary, with removal and reset
-
add(name, value)[source]
-
clear()[source]
-
get_name(index)[source]
-
indexOf(name)[source]
-
load_from(module_path)[source]
-
remove(name)[source]
-
rename(old_name, new_name)[source]
-
values()[source]
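The behavior described above can be sketched with a minimal stand-in. The method names follow this reference; the internals (a list for ordering plus a dict for lookup) are assumptions, not the actual implementation:

```python
class OrderedTestcases:
    """Illustrative sketch of an ordered name -> testcase mapping
    with removal and reset; internals are assumptions."""

    def __init__(self):
        self._names = []   # preserves insertion order
        self._tests = {}   # name -> test function

    def add(self, name, value):
        if name not in self._tests:
            self._names.append(name)
        self._tests[name] = value

    def clear(self):
        self._names, self._tests = [], {}

    def get_name(self, index):
        return self._names[index]

    def indexOf(self, name):
        return self._names.index(name)

    def remove(self, name):
        self._names.remove(name)
        del self._tests[name]

    def rename(self, old_name, new_name):
        self._names[self._names.index(old_name)] = new_name
        self._tests[new_name] = self._tests.pop(old_name)

    def values(self):
        return [self._tests[n] for n in self._names]
```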
grader.execution_base module
This module handles the execution of the user's module. It should ideally
be called in a subprocess (as code_runner does) in a secure environment
with all code files prepared.
This overhead is needed to avoid having extra testcases loaded by the grader.
test_module loads the tester code from a file. For each test, an
async request is fired (run in another process) and resolved within the
resolve_testcase_run function. If that call times out, it is terminated.
See resolve_testcase_run for a description of the output format.
-
grader.execution_base.call_all(function_list, *args, **kwargs)[source]
-
grader.execution_base.call_test_function(test_index, tester_path, solution_path)[source]
Called in another process. Finds the test test_name, calls the
pre-test hooks and tries to execute it.
If the call raises an exception, it is printed to stdout
-
grader.execution_base.do_testcase_run(test_name, tester_path, solution_path, options)[source]
Calls the test, checking whether it raises an exception.
Returns a dictionary in the following form:
{
    "success": boolean,
    "traceback": string ("" if None),
    "error_message": string,
    "time": string (execution time, rounded to 3 decimal digits),
    "description": string (test name/its description)
}
If the test times out, traceback is set to "timeout".
Post-hooks can manipulate the test results before they are returned.
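A caller might consume the documented result dictionary like this. The sample dictionary is constructed by hand for illustration; `summarize` is a hypothetical helper, not part of the grader:

```python
# A result shaped like the documented return value of do_testcase_run
result = {
    "success": False,
    "traceback": "timeout",
    "error_message": "",
    "time": "1.000",
    "description": "solution handles large inputs",
}

def summarize(result):
    """Render one human-readable line from a result dict."""
    status = "OK" if result["success"] else "FAIL"
    line = "{} [{}s] {}".format(status, result["time"], result["description"])
    if result["traceback"] == "timeout":
        line += " (timed out)"
    return line

print(summarize(result))
```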
grader.program_container module
-
class grader.program_container.ProgramContainer(module_path, results)[source]
Bases: threading.Thread
The thread in which the user's program runs
-
log(what)[source]
-
classmethod restore_io()[source]
-
run()[source]
grader.utils module
A utility module containing helper functions used by the grader module
and some useful pre-test hooks.
-
grader.utils.beautifyDescription(description)[source]
Converts the docstring of a function to a test description
by removing excess whitespace and joining the text on one
line
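A plausible implementation of the whitespace-normalizing behavior described above (a sketch, not the actual source):

```python
def beautifyDescription(description):
    """Collapse a docstring into a single-line description:
    split on any whitespace run, re-join with single spaces."""
    return " ".join(description.split())
```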
-
grader.utils.dump_json(ordered_dict)[source]
Dumps the dict to a string, indented
-
grader.utils.get_error_message(exception)[source]
-
grader.utils.get_traceback(exception)[source]
-
grader.utils.import_module(path, name=None)[source]
-
grader.utils.is_function(value)[source]
-
grader.utils.load_json(json_string)[source]
Loads json_string into a dict
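The two JSON helpers presumably wrap the standard library; a round-trip sketch under that assumption:

```python
import json

def dump_json(ordered_dict):
    # Serialize to an indented string, as described above
    return json.dumps(ordered_dict, indent=4)

def load_json(json_string):
    # Parse the string back into a dict
    return json.loads(json_string)
```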
-
grader.utils.read_code(path)[source]
-
grader.utils.setDescription(function, description)[source]