temci.run package

Submodules

temci.run.cpuset module

temci.run.cpuset.BENCH_SET = 'temci.set'

Name of the base cpu set used by temci for benchmarking purposes

temci.run.cpuset.CONTROLLER_SUB_BENCH_SET = 'temci.set.controller'

Name of the cpu set used by the temci control process

temci.run.cpuset.CPUSET_DIR = '/cpuset'

Location that the cpu set pseudo file system is mounted at

class temci.run.cpuset.CPUSet(active: bool = True, base_core_number: Optional[int] = None, parallel: Optional[int] = None, sub_core_number: Optional[int] = None, temci_in_base_set: bool = True)[source]

Bases: object

This class allows the usage of cpusets (see man cpuset) and uses the program cset to modify them. It therefore needs root privileges to operate properly and warns if it lacks them.

Initializes the cpu sets and determines the number of parallel programs (parallel_number variable).

Parameters
  • active – are cpu sets actually used?

  • base_core_number – number of cpu cores for the base (remaining part of the) system

  • parallel – 0: benchmark sequential, > 0: benchmark parallel with n instances, -1: determine n automatically

  • sub_core_number – number of cpu cores per parallel running program

  • temci_in_base_set – place temci in the same cpu set as the rest of the system?

Raises
  • ValueError – if the passed parameters don’t work together on the current platform

  • EnvironmentError – if the environment can’t be setup properly (e.g. no root privileges)
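
A minimal usage sketch (assuming root privileges and an installed cset tool; the core counts are illustrative and have to fit the machine):

from temci.run.cpuset import CPUSet

# hypothetical example: two parallel benchmarking sub sets with 2 cores each,
# 2 cores left for the rest of the system
cpu_set = CPUSet(active=True, base_core_number=2, parallel=2, sub_core_number=2)
try:
    for i in range(cpu_set.parallel_number):
        print(cpu_set.get_sub_set(i))   # e.g. 'temci.set.0'
finally:
    cpu_set.teardown()                  # restore the original cpusets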

active

Are cpu sets actually used?

av_cores

Number of available cpu cores

base_core_number

Number of cpu cores for the base (remaining part of the) system

get_sub_set(set_id: int) str[source]

Gets the name of the benchmarking cpu set with the given id / number (starting at zero).

move_process_to_set(pid: int, set_id: int)[source]

Moves the process with the passed id to the parallel sub cpuset with the passed id.

Parameters
  • pid – passed process id

  • set_id – passed parallel sub cpuset id

parallel

0: benchmark sequential, > 0: benchmark parallel with n instances, -1: determine n automatically

parallel_number

Number of used parallel instances, zero if the benchmarking is done sequentially

sub_core_number

Number of cpu cores per parallel running program

teardown()[source]

Tears the created cpusets down and makes the system usable again.

temci_in_base_set

Place temci in the same cpu set as the rest of the system?

temci.run.cpuset.NEW_ROOT_SET = 'bench.root'

Name of the new root cpu set that contains most of the processes of the original root set

temci.run.cpuset.SUB_BENCH_SET = 'temci.set.{}'

Format of cpu sub set names for benchmarking

temci.run.run_driver module

This modules contains the base run driver, needed helper classes and registries.

class temci.run.run_driver.AbstractRunDriver(misc_settings: Optional[dict] = None)[source]

Bases: AbstractRegistry

A run driver that does the actual benchmarking and supports plugins to modify the benchmarking environment.

Creates an instance. The constructor calls the setup methods on all registered plugins and then the setup() method.

Parameters

misc_settings – further settings

benchmark(block: RunProgramBlock, runs: int, cpuset: Optional[CPUSet] = None, set_id: int = 0, timeout: float = - 1) BenchmarkingResultBlock[source]

Benchmark the passed program block “runs” times and return the benchmarking results.

Parameters
  • block – run program block to benchmark

  • runs – number of benchmarking runs

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarked block should be executed in

  • timeout – timeout or -1 if no timeout is given

Returns

object that contains a dictionary of properties with associated raw run data

block_type_scheme = Dict({}, False, keys=Any, values=Any, default = {})

Type scheme for the program block configuration

default = []

Name(s) of the class(es) used by default. Type depends on the use_list property.

classmethod get_full_block_typescheme() Type[source]
get_property_descriptions() Dict[str, str][source]

Returns a dictionary that maps some properties to their short descriptions.

get_used_plugins() List[str][source]
misc_settings

Further settings

plugin_synonym = ('run driver plugin', 'run driver plugins')

Singular and plural version of the word that is used in the documentation for the registered entities

registry = {}

Registered classes (indexed by their name)

runs_benchmarks = True
settings_key_path = 'run/plugins'

Used settings key path

setup()[source]

Call the setup() method on all used plugins for this driver.

classmethod store_example_config(file: str, comment_out_defaults: bool = False)[source]
store_files = True
teardown()[source]

Call the teardown() method on all used plugins for this driver.

use_key = 'active'

Used key that sets which registered class is currently used

use_list = True

Allow more than one class to be used at a specific moment in time

used_plugins

Used and active plugins

exception temci.run.run_driver.BenchmarkingError[source]

Bases: RuntimeError

Thrown when the benchmarking of a program block fails.

exception temci.run.run_driver.BenchmarkingProgramError(recorded_error: RecordedProgramError)[source]

Bases: BenchmarkingError

Thrown when the benchmarked program fails

class temci.run.run_driver.BenchmarkingResultBlock(data: Dict[str, List[Union[int, float]]] = None, error: BaseException = None, recorded_error: RecordedError = None)[source]

Bases: object

Result of the benchmarking of one block. It includes the error object if an error occurred.

Creates an instance.

Parameters
  • data – measured data per measured property

  • error – exception object if something went wrong during benchmarking

add_run_data(data: Dict[str, Union[int, float, List[Union[int, float]]]])[source]

Add data.

Parameters

data – data to be added (measured data per property)

data

Measured data per measured property

error

Exception object if something went wrong during benchmarking

properties() List[str][source]

Get a list of the measured properties
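
A small usage sketch (the property names and values are illustrative):

from temci.run.run_driver import BenchmarkingResultBlock

res = BenchmarkingResultBlock()
res.add_run_data({"time": 1.23, "maxrss": 2048})   # data of a single run
res.add_run_data({"time": [1.20, 1.25]})           # data of several runs at once
print(res.properties())                            # e.g. ['time', 'maxrss']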

class temci.run.run_driver.CPUSpecExecRunner(block: RunProgramBlock)[source]

Bases: ExecRunner

A runner that uses a tool that runs the SPEC CPU benchmarks and parses the resulting files.

To use this runner, either set the runner property of a run configuration or the setting under the key run/exec_misc/runner to its name (spec.py).

The runner is configured by modifying the spec.py property of a run configuration. This configuration has the following structure:

# File patterns (the newest file will be used)
files:         ListOrTuple(Str())
            default: [result/CINT2000.*.raw, result/CFP2000.*.raw]

# Randomize the assembly during compiling?
randomize:         Bool()

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

misc_options =

# File patterns (the newest file will be used)
files:         ListOrTuple(Str())
            default: [result/CINT2000.*.raw, result/CFP2000.*.raw]

# Randomize the assembly during compiling?
randomize:         Bool()

Type scheme of the options for this type of runner

name = 'spec.py'

Name of the runner

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

class temci.run.run_driver.ExecRunDriver(misc_settings: Optional[dict] = None)[source]

Bases: AbstractRunDriver

Implements a simple run driver that just executes one of the passed run_cmds in each benchmarking run. It measures the time using the perf stat tool (runner=perf_stat).

The constructor calls the setup method.

Configuration format:

# Argument passed to all benchmarked commands by replacing $ARGUMENT with this value in the command
argument: ''

# Parse the program output as a YAML dictionary that gives a measurement for each specific property.
# Not all runners support it.
parse_output: false

# Order in which the plugins are used, plugins that do not appear in this list are used before all
# others
plugin_order: [drop_fs_caches, sync, sleep, preheat, flush_cpu_caches]

# Enable other plugins by default: none = (enable none by default); all =
# cpu_governor,disable_swap,sync,stop_start,other_nice,nice,disable_aslr,disable_ht,cpuset,disable_turbo_boost
# (enable all, might freeze your system); usable =
# cpu_governor,disable_swap,sync,nice,disable_aslr,disable_ht,cpuset,disable_turbo_boost
# (like 'all' but doesn't affect other processes)
preset: none

# Pick a random command if more than one run command is passed.
random_cmd: true

# If not '' overrides the runner setting for each program block
runner: ''

This run driver can be configured under the settings key run/exec_misc.

To use this run driver set the currently used run driver (at key run/driver) to its name (exec).

The default run driver is exec.

Block configuration format for the run configuration:

# Optional build config to integrate the build step into the run step
build_config:         Either(Dict(, keys=Any, values=Any, default = {})|non existent)

# Optional attributes that describe the block
attributes:     
    description:         Optional(Str())

    # Tags of this block
    tags:         ListOrTuple(Str())

run_config:     
    # Command to benchmark, adds to run_cmd
    cmd:         Str()

    # Command to append before the commands to benchmark
    cmd_prefix:         List(Str())

    # Execution directories for each command
    cwd:         Either(List(Str())|Str())
                default: .

    # Disable the address space layout randomization
    disable_aslr:         Bool()

    # Override all other max runs specifications if > -1
    max_runs:         Int(constraint=<function>)
                default: -1

    # Override all other min runs specifications if > -1
    min_runs:         Int(constraint=<function>)
                default: -1

    # Parse the program output as a YAML dictionary that gives a measurement for each specific
    # property. Not all runners support it.
    parse_output:         Bool()

    # Used revision (or revision number). -1 is the current revision, checks out the revision
    revision:         Either(Int(constraint=<function>)|Str())
                default: -1

    # Commands to benchmark
    run_cmd:         Either(List(Str())|Str())

    # Used runner
    runner:         ExactEither()
                default: time

    # Override the min runs and max runs specifications if > -1
    runs:         Int(constraint=<function>)
                default: -1

    # Measured properties for rusage that are stored in the benchmarking result
    rusage_properties:         ValidRusagePropertyList()

    # Environment variables
    env:         Dict(, keys=Str(), values=Any, default = {})

    # Configuration for the output and return code validator
    validator:     
        # Program error output without ignoring line breaks and spaces at the beginning and the end
        expected_err_output:         Optional(Str())

        # Strings that should be present in the program error output
        expected_err_output_contains:         Either(List(Str())|Str())

        # Program output without ignoring line breaks and spaces at the beginning and the end
        expected_output:         Optional(Str())

        # Strings that should be present in the program output
        expected_output_contains:         Either(List(Str())|Str())

        # Allowed return code(s)
        expected_return_code:         Either(List(Int())|Int())

        # Strings that shouldn't be present in the program output
        unexpected_err_output_contains:         Either(List(Str())|Str())

        # Strings that shouldn't be present in the program output
        unexpected_output_contains:         Either(List(Str())|Str())
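
A run configuration file for this driver is essentially a YAML list of such blocks; the following sketch is purely illustrative (the program fib.py and its arguments are made up):

- attributes:
    description: fibonacci
    tags: [example]
  run_config:
    run_cmd: python3 fib.py 30
    runner: time
    cwd: .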

Creates an instance, calls the setup methods on all registered plugins and then the setup() method.

Parameters

misc_settings – further settings

class ExecResult(time, stderr, stdout, rusage)

Bases: tuple

A simple named tuple named ExecResult with the properties time, stderr, stdout and rusage

property rusage

Alias for field number 3

property stderr

Alias for field number 1

property stdout

Alias for field number 2

property time

Alias for field number 0

benchmark(block: RunProgramBlock, runs: int, cpuset: Optional[CPUSet] = None, set_id: int = 0, timeout: float = - 1) BenchmarkingResultBlock[source]

Benchmark the passed program block “runs” times and return the benchmarking results.

Parameters
  • block – run program block to benchmark

  • runs – number of benchmarking runs

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarked block should be executed in

  • timeout – timeout or -1 if no timeout is given

Returns

object that contains a dictionary of properties with associated raw run data

block_type_scheme =

# Command to benchmark, adds to run_cmd
cmd:         Str()

# Command to append before the commands to benchmark
cmd_prefix:         List(Str())

# Execution directories for each command
cwd:         Either(List(Str())|Str())
            default: .

# Disable the address space layout randomization
disable_aslr:         Bool()

# Override all other max runs specifications if > -1
max_runs:         Int(constraint=<function>)
            default: -1

# Override all other min runs specifications if > -1
min_runs:         Int(constraint=<function>)
            default: -1

# Parse the program output as a YAML dictionary that gives a measurement for each specific property.
# Not all runners support it.
parse_output:         Bool()

# Used revision (or revision number). -1 is the current revision, checks out the revision
revision:         Either(Int(constraint=<function>)|Str())
            default: -1

# Commands to benchmark
run_cmd:         Either(List(Str())|Str())

# Used runner
runner:         ExactEither('perf_stat'|'rusage'|'spec'|'spec.py'|'time'|'output')
            default: time

# Override the min runs and max runs specifications if > -1
runs:         Int(constraint=<function>)
            default: -1

# Measured properties for rusage that are stored in the benchmarking result
rusage_properties:         ValidRusagePropertyList()

# Environment variables
env:         Dict(, keys=Str(), values=Any, default = {})

output:         Dict({}, False, keys=Any, values=Any, default = {})

perf_stat:
    # Measured properties. The number of properties that can be measured at once is limited.
    properties:         List(Str())
                default: [wall-clock, cycles, cpu-clock, task-clock, instructions, branch-misses, cache-references]

    # If runner=perf_stat make measurements of the program repeated n times. Therefore scale the
    # number of times a program is benchmarked.
    repeat:         Int(constraint=<function>)
                default: 1

rusage:
    # Measured properties that are stored in the benchmarking result
    properties:         ValidRusagePropertyList()
                default: [idrss, inblock, isrss, ixrss, majflt, maxrss, minflt, msgrcv, msgsnd, nivcsw, nsignals,
              nswap, nvcsw, oublock, stime, utime]

spec:
    # Base property path that all other paths are relative to.
    base_path:         Str()

    # Code that is executed for each matched path. The code should evaluate to the actual measured
    # value for the path. It can use the function get(sub_path: str = '') and the modules pytimeparse,
    # numpy, math, random, datetime and time.
    code:         Str()
                default: get()

    # SPEC result file
    file:         Str()

    # Regexp matching the base property path for each measured property
    path_regexp:         Str()
                default: .*

spec.py:
    # File patterns (the newest file will be used)
    files:         ListOrTuple(Str())
                default: [result/CINT2000.*.raw, result/CFP2000.*.raw]

    # Randomize the assembly during compiling?
    randomize:         Bool()

time:
    # Measured properties that are included in the benchmarking results
    properties:         ValidTimePropertyList()
                default: [utime, stime, etime, avg_mem_usage, max_res_set, avg_res_set]

# Configuration for the output and return code validator
validator:
    # Program error output without ignoring line breaks and spaces at the beginning and the end
    expected_err_output:         Optional(Str())

    # Strings that should be present in the program error output
    expected_err_output_contains:         Either(List(Str())|Str())

    # Program output without ignoring line breaks and spaces at the beginning and the end
    expected_output:         Optional(Str())

    # Strings that should be present in the program output
    expected_output_contains:         Either(List(Str())|Str())

    # Allowed return code(s)
    expected_return_code:         Either(List(Int())|Int())

    # Strings that shouldn't be present in the program output
    unexpected_err_output_contains:         Either(List(Str())|Str())

    # Strings that shouldn't be present in the program output
    unexpected_output_contains:         Either(List(Str())|Str())

Type scheme for the program block configuration

default = []

Name(s) of the class(es) used by default. Type depends on the use_list property.

get_property_descriptions() Dict[str, str][source]

Returns a dictionary that maps some properties to their short descriptions.

classmethod get_runner(block: RunProgramBlock) ExecRunner[source]

Create the suitable runner for the passed run program block.

Parameters

block – passed run program block

get_used_plugins() List[str][source]

Get the list of names of the used plugins (use_list=True) or the name of the used plugin (use_list=False).

classmethod register_runner() Callable[[type], type][source]

Decorator to register a runner (has to be a subclass of ExecRunner).

registry = {}

Registered classes (indexed by their name)

runners = {'output': <class 'temci.run.run_driver.OutputExecRunner'>, 'perf_stat': <class 'temci.run.run_driver.PerfStatExecRunner'>, 'rusage': <class 'temci.run.run_driver.RusageExecRunner'>, 'spec': <class 'temci.run.run_driver.SpecExecRunner'>, 'spec.py': <class 'temci.run.run_driver.CPUSpecExecRunner'>, 'time': <class 'temci.run.run_driver.TimeExecRunner'>}

Dictionary mapping a runner name to a runner class

settings_key_path = 'run/exec_plugins'

Used settings key path

teardown()[source]

Call the teardown() method on all used plugins for this driver.

use_key = 'exec_active'

Used key that sets which registered class is currently used

use_list = True

Allow more than one class to be used at a specific moment in time

class temci.run.run_driver.ExecRunner(block: RunProgramBlock)[source]

Bases: object

Base class for runners for the ExecRunDriver. A runner deals with creating the commands that actually measure a program and parse their outputs.

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

get_property_descriptions() Dict[str, str][source]

Returns a dictionary that maps some properties to their short descriptions.

misc

Options for this runner

misc_options = Dict({}, False, keys=Any, values=Any, default = {})

Type scheme of the options for this type of runner

name = None

Name of the runner

parse_result(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None, parse_output: bool = False) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

  • parse_output – parse standard out to get additional properties

Returns

the modified benchmarking result block

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

supports_parsing_out = False

Is the captured output on standard out useful for parsing

class temci.run.run_driver.ExecValidator(config: dict)[source]

Bases: object

Output validator.

Configuration:

# Program error output without ignoring line breaks and spaces at the beginning and the end
expected_err_output:         Optional(Str())

# Strings that should be present in the program error output
expected_err_output_contains:         Either(List(Str())|Str())

# Program output without ignoring line breaks and spaces at the beginning and the end
expected_output:         Optional(Str())

# Strings that should be present in the program output
expected_output_contains:         Either(List(Str())|Str())

# Allowed return code(s)
expected_return_code:         Either(List(Int())|Int())

# Strings that shouldn't be present in the program output
unexpected_err_output_contains:         Either(List(Str())|Str())

# Strings that shouldn't be present in the program output
unexpected_output_contains:         Either(List(Str())|Str())
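
An illustrative validator configuration inside a run program block (all values are made up):

validator:
  expected_return_code: 0
  expected_output_contains: "result ="
  unexpected_err_output_contains: [error, exception]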

Creates an instance.

Parameters

config – validator configuration

config

Validator configuration

config_type_scheme =

# Program error output without ignoring line breaks and spaces at the beginning and the end
expected_err_output:         Optional(Str())

# Strings that should be present in the program error output
expected_err_output_contains:         Either(List(Str())|Str())

# Program output without ignoring line breaks and spaces at the beginning and the end
expected_output:         Optional(Str())

# Strings that should be present in the program output
expected_output_contains:         Either(List(Str())|Str())

# Allowed return code(s)
expected_return_code:         Either(List(Int())|Int())

# Strings that shouldn't be present in the program output
unexpected_err_output_contains:         Either(List(Str())|Str())

# Strings that shouldn't be present in the program output
unexpected_output_contains:         Either(List(Str())|Str())

Configuration type scheme

validate(cmd: str, out: str, err: str, return_code: int)[source]

Validate the passed program output, error output and return code.

Parameters
  • cmd – program command for better error messages

  • out – passed program output

  • err – passed program error output

  • return_code – passed program return code

Raises

BenchmarkingError – if the check failed

temci.run.run_driver.Number

Numeric value

alias of Union[int, float]

class temci.run.run_driver.OutputExecRunner(block: RunProgramBlock)[source]

Bases: ExecRunner

Parses the output of the called command as a YAML dictionary (or list of dictionaries) to populate the benchmark results (string keys and int or float values).

For the simplest case, a program just outputs something like time: 1000.0.

To use this runner, either set the runner property of a run configuration or the setting under the key run/exec_misc/runner to its name (output).
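
For example, a benchmarked program could print the following on standard out, which this runner would turn into the two properties time and memory (the names and values are illustrative):

time: 1000.0
memory: 2048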

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

get_property_descriptions() Dict[str, str][source]

Returns a dictionary that maps some properties to their short descriptions.

misc_options = Dict({}, False, keys=Any, values=Any, default = {})

Type scheme of the options for this type of runner

name = 'output'

Name of the runner

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

class temci.run.run_driver.PerfStatExecRunner(block: RunProgramBlock)[source]

Bases: ExecRunner

Runner that uses perf stat for measurements.

To use this runner, either set the runner property of a run configuration or the setting under the key run/exec_misc/runner to its name (perf_stat).

This runner supports the parse_output option.

The runner is configured by modifying the perf_stat property of a run configuration. This configuration has the following structure:

# Measured properties. The number of properties that can be measured at once is limited.
properties:         List(Str())
            default: [wall-clock, cycles, cpu-clock, task-clock, instructions, branch-misses, cache-references]

# If runner=perf_stat make measurements of the program repeated n times. Therefore scale the number of
# times a program is benchmarked.
repeat:         Int(constraint=<function>)
            default: 1
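
An illustrative run_config snippet that uses this runner (the command and the chosen properties are made up; which events are available depends on the local perf installation):

run_config:
  run_cmd: ./bench
  runner: perf_stat
  perf_stat:
    properties: [wall-clock, instructions, cache-references]
    repeat: 2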

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

misc_options =

# Measured properties. The number of properties that can be measured at once is limited.
properties:         List(Str())
            default: [wall-clock, cycles, cpu-clock, task-clock, instructions, branch-misses, cache-references]

# If runner=perf_stat make measurements of the program repeated n times. Therefore scale the number of
# times a program is benchmarked.
repeat:         Int(constraint=<function>)
            default: 1

Type scheme of the options for this type of runner

name = 'perf_stat'

Name of the runner

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

supports_parsing_out = True

Is the captured output on standard out useful for parsing

class temci.run.run_driver.RunDriverRegistry[source]

Bases: AbstractRegistry

The registry for run drivers.

The used run driver can be configured by editing the settings key run/driver. Possible run drivers are ‘exec’ and ‘shell’.

default = 'exec'

Name(s) of the class(es) used by default. Type depends on the use_list property.

plugin_synonym = ('run driver', 'run drivers')

Singular and plural version of the word that is used in the documentation for the registered entities

classmethod register(name: str, klass: type, misc_type: Type, deprecated: bool = False)[source]

Registers a new class. The constructor of the class gets as first argument the misc settings.

Parameters
  • name – common name of the registered class

  • klass – actual class

  • misc_type – type scheme of the {name}_misc settings

  • misc_default – default value of the {name}_misc settings

  • deprecated – is the registered class deprecated and should not be used?
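
A hypothetical registration call, assuming MyRunDriver is an AbstractRunDriver subclass and MY_MISC_TYPE is a valid type scheme for its misc settings:

RunDriverRegistry.register("my_driver", MyRunDriver, MY_MISC_TYPE)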

registry = {'exec': <class 'temci.run.run_driver.ExecRunDriver'>, 'shell': <class 'temci.run.run_driver.ShellRunDriver'>}

Registered classes (indexed by their name)

settings_key_path = 'run'

Used settings key path

use_key = 'driver'

Used key that sets which registered class is currently used

use_list = False

Allow more than one class to be used at a specific moment in time

class temci.run.run_driver.RunProgramBlock(id: int, data: Dict[str, Any], attributes: Dict[str, str], run_driver_class: Optional[type] = None)[source]

Bases: object

An object that contains every needed information of a program block.

Creates an instance.

Parameters
  • data – run driver configuration for this run program block

  • attributes – attributes of this run program block

  • run_driver_class – used type of run driver with this instance

attributes

Describing attributes of this run program block

copy() RunProgramBlock[source]

Copy this run program block. Deep copies the data and uses the same type scheme and attributes.

data

Run driver configuration

description() str[source]
classmethod from_dict(id: int, data: Dict, run_driver: Optional[type] = None)[source]

Structure of data:

{
   "attributes": {"attr1": ..., ...},
   "run_config": {"prop1": ..., ...},
   "build_config": {"prop1": ..., ...}
}
Parameters
  • id – id of the block (only used to track them later)

  • data – used data

  • run_driver – used RunDriver subclass

Returns

new RunProgramBlock
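
A minimal sketch of such a call (all values are illustrative):

from temci.run.run_driver import RunProgramBlock

block = RunProgramBlock.from_dict(0, {
    "attributes": {"description": "fibonacci", "tags": ["example"]},
    "run_config": {"run_cmd": "python3 fib.py 30", "runner": "time"},
})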

id

Id of this run program block

is_enqueued

Is this program block enqueued in a run worker pool queue?

run_driver_class

Used type of run driver

to_dict() Dict[source]

Serializes this instance into a data structure that is accepted by the from_dict method.

type_scheme

Configuration type scheme of the used run driver

class temci.run.run_driver.RusageExecRunner(block: RunProgramBlock)[source]

Bases: ExecRunner

Runner that uses the getrusage(2) function to obtain resource measurements.

To use this runner, either set the runner property of a run configuration or the setting under the key run/exec_misc/runner to its name (rusage).

The runner is configured by modifying the rusage property of a run configuration. This configuration has the following structure:

# Measured properties that are stored in the benchmarking result
properties:         ValidRusagePropertyList()
            default: [idrss, inblock, isrss, ixrss, majflt, maxrss, minflt, msgrcv, msgsnd, nivcsw, nsignals,
          nswap, nvcsw, oublock, stime, utime]
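
An illustrative run_config snippet that uses this runner (the command and the property selection are made up):

run_config:
  run_cmd: ./bench
  runner: rusage
  rusage:
    properties: [maxrss, utime, stime]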

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

get_property_descriptions() Dict[str, str][source]

Returns a dictionary that maps some properties to their short descriptions.

misc_options =

# Measured properties that are stored in the benchmarking result
properties:         ValidRusagePropertyList()
            default: [idrss, inblock, isrss, ixrss, majflt, maxrss, minflt, msgrcv, msgsnd, nivcsw, nsignals,
          nswap, nvcsw, oublock, stime, utime]

Type scheme of the options for this type of runner

name = 'rusage'

Name of the runner

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

class temci.run.run_driver.ShellRunDriver(misc_settings: Optional[dict] = None)[source]

Bases: ExecRunDriver

Implements a run driver that runs the benchmarked command a single time with redirected input and output. It can be used to run your own benchmarking commands inside a sane benchmarking environment.

The constructor calls the setup method.

Configuration format:

# Order in which the plugins are used, plugins that do not appear in this list are used before all
# others
plugin_order: [drop_fs_caches, sync, sleep, preheat, flush_cpu_caches]

# Enable other plugins by default: none = (enable none by default); all =
# cpu_governor,disable_swap,sync,stop_start,other_nice,nice,disable_aslr,disable_ht,cpuset,disable_turbo_boost
# (enable all, might freeze your system); usable =
# cpu_governor,disable_swap,sync,nice,disable_aslr,disable_ht,cpuset,disable_turbo_boost
# (like 'all' but doesn't affect other processes)
preset: none

This run driver can be configured under the settings key run/shell_misc.

To use this run driver set the currently used run driver (at key run/driver) to its name (shell). Another usable run driver is exec. The default run driver is exec.

Block configuration format for the run configuration:

# Optional build config to integrate the build step into the run step
build_config:         Either(Dict(, keys=Any, values=Any, default = {})|non existent)

# Optional attributes that describe the block
attributes:     
    description:         Optional(Str())

    # Tags of this block
    tags:         ListOrTuple(Str())

run_config:     
    # Execution directory
    cwd:         Either(List(Str())|Str())
                default: .

    # Command to run
    run_cmd:         Str()
                default: sh

    # Environment variables
    env:         Dict(, keys=Str(), values=Any, default = {})
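
An illustrative block for this driver (the script name is made up; the command is expected to do its own measuring):

- attributes:
    description: custom shell benchmark
  run_config:
    cwd: .
    run_cmd: ./my_benchmark.sh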

Creates an instance, calls the setup methods on all registered plugins and then the setup() method.

Parameters

misc_settings – further settings

benchmark(block: RunProgramBlock, runs: int, cpuset: Optional[CPUSet] = None, set_id: int = 0, timeout: float = - 1) BenchmarkingResultBlock[source]

Benchmark the passed program block “runs” times and return the benchmarking results.

Parameters
  • block – run program block to benchmark

  • runs – number of benchmarking runs

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarked block should be executed in

  • timeout – timeout or -1 if no timeout is given

Returns

object that contains a dictionary of properties with associated raw run data

block_type_scheme =

# Execution directory
cwd:         Either(List(Str())|Str())
            default: .

# Command to run
run_cmd:         Str()
            default: sh

# Environment variables
env:         Dict(, keys=Str(), values=Any, default = {})

Type scheme for the program block configuration

runs_benchmarks = False
store_files = False
teardown()[source]

Call the teardown() method on all used plugins for this driver.

class temci.run.run_driver.SpecExecRunner(block: RunProgramBlock)[source]

Bases: ExecRunner

Runner for SPEC-like benchmarking suites. It works with the resulting property files, in which the properties are separated from their values by colons.

To use this runner, either set the runner property of a run configuration or the setting under the key run/exec_misc/runner to its name (spec).

The runner is configured by modifying the spec property of a run configuration. This configuration has the following structure:

# Base property path that all other paths are relative to.
base_path:         Str()

# Code that is executed for each matched path. The code should evaluate to the actual measured value
# for the path. It can use the function get(sub_path: str = '') and the modules pytimeparse, numpy,
# math, random, datetime and time.
code:         Str()
            default: get()

# SPEC result file
file:         Str()

# Regexp matching the base property path for each measured property
path_regexp:         Str()
            default: .*
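
An illustrative run_config snippet for this runner (the result file, base path, regular expression and code are made up):

run_config:
  runner: spec
  spec:
    file: result.txt
    base_path: results
    path_regexp: .*\.time
    code: get() / 1000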

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

misc_options =

# Base property path that all other paths are relative to.
base_path:         Str()

# Code that is executed for each matched path. The code should evaluate to the actual measured value
# for the path. It can use the function get(sub_path: str = '') and the modules pytimeparse, numpy,
# math, random, datetime and time.
code:         Str()
            default: get()

# SPEC result file
file:         Str()

# Regexp matching the base property path for each measured property
path_regexp:         Str()
            default: .*

Type scheme of the options for this type of runner

name = 'spec'

Name of the runner

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

class temci.run.run_driver.TimeExecRunner(block: RunProgramBlock)[source]

Bases: ExecRunner

Uses the GNU time tool and is mostly equivalent to the rusage runner, but more user friendly.

To use this runner, either set the runner property of a run configuration or the setting under the key run/exec_misc/runner to its name (time).

This runner supports the parse_output option.

The runner is configured by modifying the time property of a run configuration. This configuration has the following structure:

# Measured properties that are included in the benchmarking results
properties:         ValidTimePropertyList()
            default: [utime, stime, etime, avg_mem_usage, max_res_set, avg_res_set]
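
An illustrative run_config snippet that uses this runner (the command and the property selection are made up):

run_config:
  run_cmd: ./bench
  runner: time
  time:
    properties: [utime, stime, max_res_set]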

Creates an instance.

Parameters

block – run program block to measure

Raises

KeyboardInterrupt – if the runner can’t be used (e.g. if the used tool isn’t installed or compiled)

get_property_descriptions() Dict[str, str][source]

Returns a dictionary that maps some properties to their short descriptions.

misc_options =

# Measured properties that are included in the benchmarking results
properties:         ValidTimePropertyList()
            default: [utime, stime, etime, avg_mem_usage, max_res_set, avg_res_set]

Type scheme of the options for this type of runner

name = 'time'

Name of the runner

parse_result_impl(exec_res: ExecResult, res: Optional[BenchmarkingResultBlock] = None) BenchmarkingResultBlock[source]

Parse the output of a program and turn it into benchmarking results.

Parameters
  • exec_res – program output

  • res – benchmarking result to which the extracted results should be added or None if they should be added to an empty one

Returns

the modified benchmarking result block

setup_block(block: RunProgramBlock, cpuset: Optional[CPUSet] = None, set_id: int = 0)[source]

Configure the passed copy of a run program block (e.g. the run command).

Parameters
  • block – modified copy of a block

  • cpuset – used CPUSet instance

  • set_id – id of the cpu set the benchmarking takes place in

supports_parsing_out = True

Is the captured output on standard out useful for parsing

exception temci.run.run_driver.TimeoutException(cmd: str, timeout: float, out: str, err: str, ret_code: int)[source]

Bases: BenchmarkingProgramError

Thrown whenever a benchmarked program times out

class temci.run.run_driver.ValidPerfStatPropertyList[source]

Bases: Type

Checks that the value is a valid perf stat measurement property list (or that the perf tool is missing).

Creates an instance.

Parameters

completion_hints – completion hints for supported shells for this type instance

class temci.run.run_driver.ValidPropertyList(av_properties: Iterable[str])[source]

Bases: Type

Checks that the value is a valid property list that contains only elements from a given list.

Creates an instance.

Parameters

av_properties – allowed list elements

av

Allowed list elements

class temci.run.run_driver.ValidRusagePropertyList[source]

Bases: ValidPropertyList

Checks that the value is a valid rusage runner measurement property list.

Creates an instance.

Parameters

av_properties – allowed list elements

class temci.run.run_driver.ValidTimePropertyList[source]

Bases: ValidPropertyList

Checks that the value is a valid time runner measurement property list.

Creates an instance.

Parameters

av_properties – allowed list elements

temci.run.run_driver.clean_output(output: str) str[source]

Remove everything after the header

temci.run.run_driver.filter_runs(blocks: List[Union[RunProgramBlock, RunData]], included: List[str]) List[RunProgramBlock][source]

Filter run blocks (all: include all), identified by their description, tag, or number in the file (starting at zero), and run data objects (identified only by their description and tag). The include query can also consist of regular expressions.

Parameters
  • blocks – blocks or run datas to filter

  • included – include query

Returns

filtered list
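
Illustrative calls (the descriptions, tags and numbers are made up; see the description above for the query syntax):

filter_runs(blocks, ["all"])          # keep every block
filter_runs(blocks, ["fib.*", "3"])   # keep blocks matching the regexp 'fib.*' or with number 3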

temci.run.run_driver.get_av_perf_stat_properties() List[str][source]

Returns the list of properties that are measurable with the used perf stat tool.

temci.run.run_driver.get_av_rusage_properties() Dict[str, str][source]

Returns the available properties for the RusageExecRunner mapped to their descriptions.

temci.run.run_driver.get_av_time_properties() Dict[str, str][source]

Returns the available properties for the TimeExecRunner mapped to their descriptions.

temci.run.run_driver.get_av_time_properties_with_format_specifiers() Dict[str, Tuple[str, str]][source]

Returns the available properties for the TimeExecRunner mapped to their descriptions and time format specifiers.

temci.run.run_driver.header() str[source]

A header to use for measurement formatting

temci.run.run_driver.is_perf_available() bool[source]

Is the perf tool available?

temci.run.run_driver.log_program_error(recorded_error: RecordedInternalError)[source]
temci.run.run_driver.time_file(_tmp=[]) str[source]

Returns the command used to execute the (GNU) time tool (not the built in shell tool).

temci.run.run_driver_plugin module

This module consists of run driver plugin implementations.

class temci.run.run_driver_plugin.AbstractRunDriverPlugin(misc_settings)[source]

Bases: object

A plugin for a run driver. It allows additional modifications of the benchmarking environment. The object is instantiated before the benchmarking starts and is used for all benchmarking runs; a minimal plugin sketch is given below, after the method descriptions.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = False

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

setup_block(block: RunProgramBlock, runs: int = 1)[source]

Called before each run program block is run “runs” times.

Parameters
  • block – run program block to modify

  • runs – number of times the program block is run (and measured) at once.

setup_block_run(block: RunProgramBlock)[source]

Called before each run program block is run.

Parameters

block – run program block to modify

teardown()[source]

Called after the whole benchmarking is finished.

teardown_block(block: RunProgramBlock)[source]

Called after each run program block is run.

Parameters

block – run program block
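
A minimal sketch of a custom plugin (hypothetical; a real plugin also has to be registered for the used run driver, which is not shown here):

import os
from temci.run.run_driver import RunProgramBlock
from temci.run.run_driver_plugin import AbstractRunDriverPlugin

class MySyncPlugin(AbstractRunDriverPlugin):
    """Hypothetical plugin that calls sync before each program run."""

    needs_root_privileges = False

    def setup_block_run(self, block: RunProgramBlock):
        os.system("sync")   # flush file system buffers to disk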

class temci.run.run_driver_plugin.CPUGovernor(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Allows setting the scaling governor of all cpu cores, to ensure that they all use the same one.

Configuration format:

# New scaling governor for all cpus
governor: performance

This run driver plugin can be configured under the settings key run/exec_plugins/cpu_governor_misc.

To use this run driver plugin add its name (cpu_governor) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/cpu_governor_active to true. Other usable run driver plugins are nice, env_randomize, preheat, other_nice, stop_start, sync, sleep, drop_fs_caches, disable_swap, disable_cpu_caches and flush_cpu_caches.
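
An illustrative settings snippet (e.g. in a temci settings YAML file) that activates and configures this plugin:

run:
  exec_plugins:
    exec_active: [cpu_governor]
    cpu_governor_misc:
      governor: performance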

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.CPUSet(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Enables cpusets; simply sets run/cpuset/active to true.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

class temci.run.run_driver_plugin.DisableASLR(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Disable address space randomization

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.DisableAmdTurbo(misc_settings)[source]

Bases: DisableTurboBoost

Disable amd turbo boost

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

class temci.run.run_driver_plugin.DisableCPUCaches(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Disable the L1 and L2 caches on x86 and x86-64 architectures. Uses a small custom kernel module (be sure to compile it via ‘temci setup --build_kernel_modules’).

Warning

slows programs down significantly and probably has other weird consequences

Warning

this is untested

Warning

a Linux forum user declared: Disabling CPU caches gives you a Pentium-I-like processor!!!

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.DisableHyperThreading(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Disable hyper-threading

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.DisableIntelTurbo(misc_settings)[source]

Bases: DisableTurboBoost

Disable intel turbo mode

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

class temci.run.run_driver_plugin.DisableSwap(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Disables swapping on the system before the benchmarking and enables it after.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.DisableTurboBoost(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Disable amd and intel turbo boost

Creates an instance.

Parameters

misc_settings – configuration of this plugin

CPU_PATHS = {'amd': ('/sys/devices/system/cpu/cpufreq/boost', <class 'int'>), 'intel': ('/sys/devices/system/cpu/intel_pstate/no_turbo', <function DisableTurboBoost.<lambda>>)}
needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.DiscardedRuns(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Sets run/discarded_runs

Configuration format:

# Number of discarded runs
runs: 1

This run driver plugin can be configured under the settings key run/exec_plugins/discarded_runs_misc.

To use this run driver plugin add its name (discarded_runs) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/discarded_runs_active to true. Other usable run driver plugins are nice, env_randomize, preheat, other_nice, stop_start, sync, sleep, drop_fs_caches, disable_swap, disable_cpu_caches, flush_cpu_caches, cpu_governor, disable_aslr, disable_ht, disable_turbo_boost, disable_intel_turbo, disable_amd_boost and cpuset.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

class temci.run.run_driver_plugin.DropFSCaches(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Drop the page cache, directory entries and inodes before every benchmarking run.

Configuration format:

# Free dentries and inodes
free_dentries_inodes: true

# Free the page cache
free_pagecache: true

This run driver plugin can be configured under the settings key run/exec_plugins/drop_fs_caches_misc.

To use this run driver plugin add its name (drop_fs_caches) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/drop_fs_caches_active to true. Other usable run driver plugins are nice, env_randomize, preheat, other_nice, stop_start, sync and sleep.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup_block(block: RunProgramBlock, runs: int = 1)[source]

Called before each run program block is run “runs” times.

Parameters
  • block – run program block to modify

  • runs – number of times the program block is run (and measured) at once.

class temci.run.run_driver_plugin.EnvRandomizePlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Adds random environment variables.

Configuration format:

# Maximum length of each random key
key_max: 4096

# Maximum number of added random environment variables
max: 4

# Minimum number of added random environment variables
min: 4

# Maximum length of each random value
var_max: 4096

This run driver plugin can be configured under the settings key run/exec_plugins/env_randomize_misc.

To use this run driver plugin add its name (env_randomize) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/env_randomize_active to true. Another usable run driver plugin is nice.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

setup_block_run(block: RunProgramBlock, runs: int = 1)[source]

Called before each run program block is run.

Parameters

block – run program block to modify

class temci.run.run_driver_plugin.FlushCPUCaches(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Flushes the CPU caches on an x86 CPU using a small kernel module (see WBINVD).

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup_block_run(block: RunProgramBlock)[source]

Called before each run program block is run.

Parameters

block – run program block to modify

class temci.run.run_driver_plugin.NicePlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Allows the setting of the nice and ionice values of the benchmarking process.

Configuration format:

# Specify the name or number of the scheduling class to use; 0 for none, 1 for realtime,
# 2 for best-effort, 3 for idle.
io_nice: 1

# Niceness values range from -20 (most favorable to the process) to 19 (least favorable to the
# process).
nice: -15

This run driver plugin can be configured under the settings key run/exec_plugins/nice_misc.

To use this run driver plugin add its name (nice) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/nice_active to true.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

needs_root_privileges = True

Does this plugin work only with root privileges?

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.OtherNicePlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Allows the setting of the nice value of most other processes (as far as possible).

Configuration format:

# Processes with lower nice values are ignored.
min_nice: -10

# Niceness values for other processes.
nice: 19

This run driver plugin can be configured under the settings key run/exec_plugins/other_nice_misc.

To use this run driver plugin add its name (other_nice) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/other_nice_active to true. Other usable run driver plugins are nice, env_randomize and preheat.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.PreheatPlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Preheats the system with a cpu bound task (calculating the inverse of a big random matrix with numpy).

Configuration format:

# Number of seconds to preheat the system with a cpu bound task
time: 10

# When to preheat
when: [before_each_run]

This run driver plugin can be configured under the settings key run/exec_plugins/preheat_misc.

To use this run driver plugin add its name (preheat) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/preheat_active to true. Other usable run driver plugins are nice and env_randomize.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

setup_block(block: RunProgramBlock, runs: int = 1)[source]

Called before each run program block is run “runs” times.

Parameters
  • block – run program block to modify

  • runs – number of times the program block is run (and measured) at once.

class temci.run.run_driver_plugin.SleepPlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Sleep a given amount of time before the benchmarking begins.

See Gernot Heiser's Systems Benchmarking Crimes: make sure that the system is really quiescent when starting an experiment, and leave enough time to ensure all previous data is flushed out.

Configuration format:

# Seconds to sleep
seconds: 10

This run driver plugin can be configured under the settings key run/exec_plugins/sleep_misc.

To use this run driver plugin add its name (sleep) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/sleep_active to true. Other usable run driver plugins are nice, env_randomize, preheat, other_nice, stop_start and sync.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

setup_block(block: RunProgramBlock, runs: int = 1)[source]

Called before each run program block is run “runs” times.

Parameters
  • block – run program block to modify

  • runs – number of times the program block is run (and measured) at once.

class temci.run.run_driver_plugin.StopStartPlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Stop almost all other processes (as far as possible).

Configuration format:

# Each process whose name (lower-cased) starts with one of the prefixes is not ignored. Overrides the
# decision based on the min_id.
comm_prefixes: [ssh, xorg, bluetoothd]

# Each process whose name (lower-cased) starts with one of the prefixes is ignored. It overrides the
# decisions based on comm_prefixes and min_id.
comm_prefixes_ignored: [dbus, kworker]

# Only output the processes that would be stopped, but don't actually stop them?
dry_run: false

# Processes with lower id are ignored.
min_id: 1500

# Processes with lower nice values are ignored.
min_nice: -10

# Suffixes of process names that are stopped.
subtree_suffixes: [dm, apache]

This run driver plugin can be configured under the settings key run/exec_plugins/stop_start_misc.

To use this run driver plugin add its name (stop_start) to the list at settings key run/exec_plugins/exec_active or set run/exec_plugins/stop_start_active to true. Other usable run driver plugins are nice, env_randomize, preheat and other_nice.
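
A hedged sketch of the selection precedence described by the configuration comments above (comm_prefixes_ignored overrides comm_prefixes, which in turn overrides the min_id threshold); it is not the plugin's actual implementation and the helper below is purely illustrative:

def should_stop(comm: str, pid: int, nice: int, config: dict) -> bool:
    """Decide whether a process is a candidate for stopping."""
    name = comm.lower()
    if nice < config["min_nice"]:
        return False  # processes with lower nice values are ignored
    if any(name.startswith(p) for p in config["comm_prefixes_ignored"]):
        return False  # overrides both comm_prefixes and min_id
    if any(name.startswith(p) for p in config["comm_prefixes"]):
        return True   # overrides the decision based on min_id
    return pid >= config["min_id"]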

Creates an instance.

Parameters

misc_settings – configuration of this plugin

parse_processes()[source]
setup()[source]

Called before the whole benchmarking starts (e.g. to set the “nice” value of the benchmarking process).

teardown()[source]

Called after the whole benchmarking is finished.

class temci.run.run_driver_plugin.SyncPlugin(misc_settings)[source]

Bases: AbstractRunDriverPlugin

Calls sync before each program execution.

Creates an instance.

Parameters

misc_settings – configuration of this plugin

setup_block_run(block: RunProgramBlock, runs: int = 1)[source]

Called before each run program block is run.

Parameters

block – run program block to modify
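
For illustration only: the effect of this plugin corresponds roughly to issuing the sync(2) system call before each execution, which Python exposes directly:

import os

os.sync()  # flush filesystem buffers, like running the `sync` command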

temci.run.run_processor module

class temci.run.run_processor.RunProcessor(runs: Optional[List[dict]] = None, append: Optional[bool] = None, show_report: Optional[bool] = None)[source]

Bases: object

This class handles the coordination of the whole benchmarking process. It is configured by setting the settings of the stats and run domain.

Important note: the constructor also sets up the cpu sets and plugins that can alter the system, e.g. confine most processes to only one core. Be sure to call the teardown() or the benchmark() method to make the system usable again.

Creates an instance and sets up everything.

Parameters
  • runs – list of dictionaries that represent run program blocks; if None, Settings()[“run/in”] is used

  • append – append to the old benchmarks if there are any in the result file?

  • show_report – show a short report after finishing the benchmarking?
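
A hedged usage sketch based only on the constructor and methods documented in this entry; all benchmark configuration is assumed to already be present in the settings:

from temci.run.run_processor import RunProcessor

processor = RunProcessor()   # with runs=None the blocks come from Settings()["run/in"]
try:
    processor.benchmark()    # benchmarks and, per the note above, also tears down
except BaseException:
    processor.teardown()     # make the system usable again even if benchmarking fails
    raise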

append

Append to the old benchmarks if there are any in the result file?

benchmark()[source]

Benchmark and teardown.

block_run_count

Number of benchmarked blocks

build()[source]

Build before benchmarking, essentially calls temci build where necessary and modifies the run configs

discarded_runs

First n runs that are discarded

end_time

Unix time stamp of the latest point in time that the benchmarking may reach

erroneous_run_blocks

List of all failing run blocks (id and results till failing)

fixed_runs

Do a fixed number of benchmarking runs?

max_runs

Maximum number of benchmarking runs

maximum_of_max_runs() int[source]
maximum_of_min_runs() int[source]
min_runs

Minimum number of benchmarking runs

pool

Used run worker pool that abstracts the benchmarking

print_report() str[source]
recorded_error() bool[source]
run_block_size

Number of benchmarking runs that are done together

run_blocks

Run program blocks for each dictionary in runs

runs

List of dictionaries that represent run program blocks

show_report

Show a short report after finishing the benchmarking?

shuffle

Randomize the order in which the program blocks are benchmarked.

start_time

Unix time stamp of the start of the benchmarking

stats_helper

Used stats helper to help with measurements

store()[source]

Store the result file

store_and_teardown()[source]

Teardown everything, store the result file, print a short report and send an email if configured to do so.

store_erroneous()[source]

Store the failing program blocks in a file ending with .erroneous.yaml.

store_often

Store the result file after each set of blocks is benchmarked

teardown()[source]

Tear down everything (make the system usable again)

temci.run.run_worker_pool module

This module consists of the abstract run worker pool class and several implementations.

class temci.run.run_worker_pool.AbstractRunWorkerPool(run_driver_name: Optional[str] = None, end_time: float = - 1)[source]

Bases: object

An abstract run worker pool that just deals with the hyper threading setting.

Create an instance.

Parameters

run_driver_name – name of the used run driver, if None the one configured in the settings is used

cpuset

Used cpu set instance

classmethod disable_hyper_threading() List[int][source]
classmethod enable_hyper_threading(disabled_cores: List[int])[source]
classmethod get_hyper_threading_cores() List[int][source]

Adapted from http://unix.stackexchange.com/a/223322
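
A hedged sketch of how hyper-threading sibling cores can be identified on Linux via sysfs, in the spirit of the referenced stackexchange answer; it is not necessarily identical to temci's implementation:

from pathlib import Path

def hyper_threading_cores() -> list:
    """Return the "virtual" sibling cores of each physical core."""
    siblings = set()
    pattern = "cpu[0-9]*/topology/thread_siblings_list"
    for topo in Path("/sys/devices/system/cpu").glob(pattern):
        # Entries look like "0,4" or "0-1"; the first listed core is kept,
        # the remaining siblings are candidates for being disabled.
        cores = topo.read_text().strip().replace("-", ",").split(",")
        siblings.update(int(c) for c in cores[1:])
    return sorted(siblings)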

has_time_left() bool[source]
next_block_timeout() float[source]
parallel_number

Number of instances in which the benchmarking takes place in parallel

result_queue

Queue of benchmarking results. The queue items are tuples consisting of the benchmarked block, the benchmarking result and the block's id.

results(expected_num: int) Iterator[Tuple[RunProgramBlock, BenchmarkingResultBlock, int]][source]

A generator for all available benchmarking results. The items of this generator are tuples consisting of the benchmarked block, the benchmarking result and the block's id.

Parameters

expected_num – expected number of results

run_driver

Used run driver instance

submit(block: RunProgramBlock, id: int, runs: int)[source]

Submits the passed block to be benchmarked “runs” times. It also sets the block's is_enqueued property to True.

Parameters
  • block – passed run program block

  • id – id of the passed block

  • runs – number of individual benchmarking runs

submit_queue

Queue for submitted but not benchmarked run program blocks

teardown()[source]

Tears down the inherited run driver. This should be called if all benchmarking with this pool is finished.

time_left() float[source]

Does not work properly if self.end_time == -1
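
A hedged sketch of the submit/results/teardown protocol documented above. RunProgramBlock construction is not covered in this section, so `blocks` is assumed to be a list of already-built RunProgramBlock instances:

from temci.run.run_worker_pool import RunWorkerPool

pool = RunWorkerPool()                      # sequential pool; see the entries further below
for i, block in enumerate(blocks):
    pool.submit(block, i, runs=20)          # enqueue each block for 20 benchmarking runs
for block, result, block_id in pool.results(expected_num=len(blocks)):
    pass                                    # consume (block, result, id) tuples as they arrive
pool.teardown()                             # call once all benchmarking with this pool is done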

class temci.run.run_worker_pool.BenchmarkingThread(id: int, pool: ParallelRunWorkerPool, driver: AbstractRunDriver, cpuset: CPUSet)[source]

Bases: Thread

A thread that allows parallel benchmarking.

Creates an instance.

Parameters
  • id – id of this thread

  • pool – parent run worker pool

  • driver – used run driver instance

  • cpuset – used CPUSet instance

cpuset

Used CPUSet instance

driver

Used run driver instance

id

Id of this thread

pool

Parent run worker pool

run()[source]

Start the run loop. It fetches run program blocks from the pool’s submit queue, benchmarks them and stores the results in the pool’s result queue. It stops if stop is true.

stop

Stop the run loop?

class temci.run.run_worker_pool.ParallelRunWorkerPool(run_driver_name: Optional[str] = None, end_time: float = - 1)[source]

Bases: AbstractRunWorkerPool

This run worker pool implements the parallel benchmarking of program blocks. It uses a server-client model to benchmark on different cpu cores.

Create an instance.

Parameters

run_driver_name – name of the used run driver, if None the one configured in the settings is used

results(expected_num: int) Iterator[Tuple[RunProgramBlock, BenchmarkingResultBlock, int]][source]

A generator for all available benchmarking results. The items of this generator are tuples consisting of the benchmarked block, the benchmarking result and the block's id.

Parameters

expected_num – expected number of results

submit(block: RunProgramBlock, id: int, runs: int)[source]

Submits the passed block to be benchmarked “runs” times. It also sets the block's is_enqueued property to True.

Parameters
  • block – passed run program block

  • id – id of the passed block

  • runs – number of individual benchmarking runs

teardown()[source]

Tears down the inherited run driver. This should be called if all benchmarking with this pool is finished.

threads

Running benchmarking threads

temci.run.run_worker_pool.ResultGenerator

Return type of the run worker pool results method

alias of Iterator[Tuple[RunProgramBlock, BenchmarkingResultBlock, int]]

class temci.run.run_worker_pool.RunWorkerPool(run_driver_name: Optional[str] = None, end_time: float = - 1)[source]

Bases: AbstractRunWorkerPool

This run worker pool implements the sequential benchmarking of program blocks.

Create an instance.

Parameters

run_driver_name – name of the used run driver, if None the one configured in the settings is used

results(expected_num: int) Iterator[Tuple[RunProgramBlock, BenchmarkingResultBlock, int]][source]

A generator for all available benchmarking results. The items of this generator are tuples consisting of the benchmarked block, the benchmarking result and the block's id.

Parameters

expected_num – expected number of results

submit(block: RunProgramBlock, id: int, runs: int)[source]

Submits the passed block to be benchmarked “runs” times. It also sets the block's is_enqueued property to True.

Parameters
  • block – passed run program block

  • id – id of the passed block

  • runs – number of individual benchmarking runs

teardown()[source]

Tears down the inherited run driver. This should be called if all benchmarking with this pool is finished.

Module contents

This module contains the code that performs the actual benchmarking.