simplebench.case moduleπŸ”—

Benchmark case declaration and execution.

class simplebench.case.Case(
*,
action: ActionRunner,
group: str = 'default',
title: str | None = None,
description: str | None = None,
iterations: int = 20,
warmup_iterations: int = 10,
rounds: int = 1,
min_time: float = 5.0,
max_time: float = 20.0,
variation_cols: dict[str, str] | None = None,
kwargs_variations: dict[str, list[Any]] | None = None,
runner: type[SimpleRunner] | None = None,
callback: ReporterCallback | None = None,
options: Iterable[ReporterOptions] | None = None,
)[source]πŸ”—

Bases: object

Declaration of a benchmark case.

A benchmark case defines the specific benchmark to be run, including the action to be performed, the parameters for the benchmark, and any variations of those parameters as well as the reporting group and title for the benchmark.

It also defines the number of iterations, warmup iterations, rounds, minimum and maximum time for the benchmark, the benchmark runner to use, and any callbacks to be invoked to process the results of the benchmark for reporting purposes.

The min_time, max_time, iterations, and warmup_iterations parameters control how the benchmark is executed and measured. When using the default SimpleRunner, they interact as follows:

  • The benchmark performs warmup_iterations iterations before starting the timing and measurement phase. This allows setup and caching effects to stabilize. Warmup is separate from the main benchmark iterations and counts towards neither the iterations count nor the min_time/max_time limits.

  • The benchmark runs for at least min_time wall clock seconds, but stops on completing the first iteration that ends after max_time seconds during the timing phase.

  • If the benchmark completes iterations iterations after min_time but before reaching max_time, it stops.

This means the benchmark will run for at least min_time seconds and for at least one iteration during the timing phase. If min_time is reached before iterations iterations have completed, the benchmark continues until either iterations is reached or max_time elapses, whichever happens first.
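A sketch of how these parameters combine in a case declaration (my_benchmark_action stands in for any action function, such as the one in the Minimal Example below):

from simplebench import Case

case = Case(
    action=my_benchmark_action,  # any ActionRunner; see the Minimal Example
    warmup_iterations=10,  # untimed; run first to let caching effects settle
    iterations=100,        # target iteration count for the timing phase
    min_time=5.0,          # run for at least 5 wall clock seconds
    max_time=20.0,         # stop after the first iteration ending past 20s
)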

rounds specifies the number of times the action will be executed per iteration to get a better average. Each iteration will run the specified number of rounds after setup and before teardown. The timing for the iteration will be the average time taken for the rounds in that iteration.

This reduces the impact of run-to-run variability for very fast actions by suppressing loop overhead and timer quantization in Python during the timing and measurement phase. Internally, the action is called rounds times in an unrolled loop for each iteration, and the average time per call is used as the iteration timing. Aggregating multiple calls within a single iteration, without looping constructs, reduces the relative impact of loop overhead and timer resolution limits and so allows more accurate timing of very fast actions.

The trade-off is that the total number of action calls becomes iterations * rounds, and the reported time per action call is an average over the rounds in each iteration. This can dramatically improve the accuracy of timing measurements for very fast actions, at the cost of increased total execution time for the benchmark.

The unrolled loop means that setup and teardown functions (if any) are called only once per iteration, not once per round. All rounds in the same iteration share the same setup/teardown context.

If your action is not extremely fast (roughly 10 nanoseconds per call or faster), it is recommended to leave rounds at its default value of 1. If you do raise it, consider running the benchmark both with rounds=1 and with rounds>1 to see how much the reported variability and other metrics change, as shown in the sketch below.
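A sketch of such a paired comparison (the titles are illustrative, and my_benchmark_action is the action from the Minimal Example below):

from simplebench import Case

# Same action measured two ways: one call per iteration versus an
# unrolled loop of 1000 calls per iteration. Comparing the reported
# variability between the two shows how much loop overhead and timer
# quantization were contributing to the rounds=1 numbers.
single = Case(action=my_benchmark_action, title='sum-rounds-1', rounds=1)
unrolled = Case(action=my_benchmark_action, title='sum-rounds-1000',
                rounds=1000)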

The Case class is designed to be immutable after creation. Once a Case instance is created, its properties cannot be directly changed. This immutability ensures that benchmark cases remain consistent throughout their lifecycle.

The results of the benchmark runs are stored in the results property, which is a list of Results objects. Each Results object corresponds to a specific combination of keyword argument variations.

Minimal ExampleπŸ”—
  from simplebench import (
      Case, SimpleRunner, Results, main)


  def my_benchmark_action(bench: SimpleRunner,
                          **kwargs) -> Results:
      # Perform benchmark action here
      def benchmark_operation():
          sum(range(1000))  # Example operation to benchmark

      return bench.run(benchmark_operation)


  if __name__ == '__main__':
      cases_list: list[Case] = [
          Case(action=my_benchmark_action)
      ]
      main(cases_list)
property action: ActionRunnerπŸ”—

The function to perform the benchmark.

The function must accept a bench parameter of type SimpleRunner and arbitrary keyword arguments (**kwargs), and return a Results object.

Example:

def my_benchmark_action(*, bench: SimpleRunner, **kwargs) -> Results:
    def setup_function(size: int) -> None:
        # Setup code goes here
        pass

    def teardown_function(size: int) -> None:
        # Teardown code goes here
        pass

    def action_function(size: int) -> None:
        # The code to benchmark goes here
        lst = list(range(size))

    # Perform the benchmark using the provided SimpleRunner instance
    results: Results = bench.run(
        n=kwargs.get('size', 1),
        setup=setup_function, teardown=teardown_function,
        action=action_function, **kwargs)
    return results
as_dict(
full_data: bool = False,
) dict[str, Any][source]πŸ”—

Returns the benchmark case and results as a JSON serializable dict.

Only the results statistics are included by default. To include full results data, set full_data to True.

Parameters:

full_data (bool) – Whether to include full results data. Defaults to False.

Returns:

A JSON serializable dict representation of the benchmark case and results.

Return type:

dict[str, Any]
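For example, a completed case can be dumped to JSON for archiving (a minimal sketch assuming case has already been run):

import json

with open('benchmark.json', 'w', encoding='utf-8') as fh:
    json.dump(case.as_dict(full_data=True), fh, indent=2)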

property callback: ReporterCallback | NoneπŸ”—

A callback function for additional processing of a report.

A callback function to be called with the benchmark results in a reporter. This function should accept four arguments: the Case instance, the Section, the ReporterOptions, and the output object. Leave as None if no callback is needed. (default: None)
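A minimal callback sketch, assuming the four arguments are passed positionally in the order listed above (the parameter names, import paths, and my_benchmark_action are illustrative):

from typing import Any

from simplebench import Case, Section, ReporterOptions  # paths assumed

def my_callback(case: Case, section: Section,
                options: ReporterOptions, output: Any) -> None:
    # Archive or post-process the rendered report output here.
    print(f'{case.title} [{section}]: report generated')

case = Case(action=my_benchmark_action, callback=my_callback)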

property description: strπŸ”—

A brief description of the benchmark case.

If not specified, defaults to the docstring of the action function or β€˜(no description)’ if no docstring is available.

Cannot be blank.

property expanded_kwargs_variations: list[dict[str, Any]]πŸ”—

All combinations of keyword arguments from the specified kwargs_variations.

kwargs_variations is a mapping of keyword argument names to their variations: each key is a keyword argument name, and the value is a list of possible values.

When tests are run, the benchmark will be executed for each combination of the specified keyword argument variations. For example, if kwargs_variations is

kwargs_variations argument exampleπŸ”—
  ...
  kwargs_variations={
      'size': [10, 100],
      'mode': ['fast', 'accurate']
  },
  ...

The benchmark will be run 4 times with the following combinations of keyword arguments:

Keyword (**kwargs) Argument CombinationsπŸ”—
1  {size=10, mode='fast'}
2  {size=10, mode='accurate'}
3  {size=100, mode='fast'}
4  {size=100, mode='accurate'}

The action function will be called with these keyword arguments accordingly and must accept them.
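The expansion is the Cartesian product of the variation lists; in spirit it behaves like this standalone sketch:

from itertools import product

kwargs_variations = {'size': [10, 100], 'mode': ['fast', 'accurate']}
expanded = [dict(zip(kwargs_variations, combo))
            for combo in product(*kwargs_variations.values())]
# [{'size': 10, 'mode': 'fast'}, {'size': 10, 'mode': 'accurate'},
#  {'size': 100, 'mode': 'fast'}, {'size': 100, 'mode': 'accurate'}]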

Returns:

A list of dictionaries, each representing a unique combination of keyword arguments.

Return type:

list[dict[str, Any]]

property group: strπŸ”—

The benchmark reporting group to which the benchmark case belongs for selection and reporting purposes.

Cannot be blank. It is used to categorize and filter benchmark cases.

property iterations: intπŸ”—

The number of iterations to run for the benchmark.

property kwargs_variations: dict[str, list[Any]]πŸ”—

Variations of keyword arguments for the benchmark.

Each key is a keyword argument name, and the value is a list of possible values for that argument.

When tests are run, the benchmark will be executed for each combination of the specified keyword argument variations. For example, if kwargs_variations is

kwargs_variations argument exampleπŸ”—
  ...
  kwargs_variations={
      'size': [10, 100],
      'mode': ['fast', 'accurate']
  },
  ...

The benchmark will be run 4 times with the following combinations of keyword arguments:

Keyword (**kwargs) Argument CombinationsπŸ”—
1  {size=10, mode='fast'}
2  {size=10, mode='accurate'}
3  {size=100, mode='fast'}
4  {size=100, mode='accurate'}

The action function will be called with these keyword arguments accordingly and must accept them.
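Putting it together, a case declaring these variations (with report column labels via variation_cols, described below) might look like this sketch:

from simplebench import Case

case = Case(
    action=my_benchmark_action,  # see the Minimal Example above
    kwargs_variations={
        'size': [10, 100],
        'mode': ['fast', 'accurate'],
    },
    variation_cols={'size': 'Size', 'mode': 'Mode'},  # labels illustrative
)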

property max_time: floatπŸ”—

The maximum time for the benchmark in seconds.

property min_time: floatπŸ”—

The minimum time for the benchmark in seconds.

property options: list[ReporterOptions]πŸ”—

A list of additional options for the benchmark case.

property results: list[Results]πŸ”—

The list of Results for the benchmark case.

This is a read-only attribute. To add results, use the run method.

Returns:

A list of Results objects for each variation run of the benchmark case.

Return type:

list[Results]

property rounds: intπŸ”—

The number of rounds to run for the benchmark for each iteration.

Rounds are multiple executions of the action within a single iteration to get a better average for that iteration. Each iteration will run the specified number of rounds after setup and before teardown. (default: 1)

run(
session: Session | None = None,
) None[source]πŸ”—

Run the benchmark tests.

This method will execute the benchmark for each combination of keyword arguments and collect the results. After running the benchmarks, the results will be stored in the self.results attribute.

If passed, the session’s tasks will be used to display progress, control verbosity, and pass CLI arguments to the benchmark runner.
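A minimal sketch of running a case directly without a session, reusing my_benchmark_action from the Minimal Example above:

from simplebench import Case

case = Case(action=my_benchmark_action,
            kwargs_variations={'size': [10, 100]})
case.run()                 # executes the benchmark once per combination
print(len(case.results))   # one Results object per combination -> 2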

Parameters:

session (Optional[Session]) – The session to use for the benchmark case.

property runner: type[SimpleRunner] | NoneπŸ”—

A custom runner class for the benchmark.

If None, the default SimpleRunner is used. (default: None)

A custom runner class must be a subclass of SimpleRunner and must have a method named run that accepts the same parameters as SimpleRunner.run and returns a Results object. The action function will be called with a bench parameter that is an instance of the custom runner.

Its run method may also accept additional parameters as needed. If additional parameters are required, they must be specified in the action function signature.
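A hedged sketch of a custom runner that simply delegates to SimpleRunner (this assumes SimpleRunner.run parameters can be forwarded as keyword arguments, as in the action example above; my_benchmark_action is from the Minimal Example):

from simplebench import Case, Results, SimpleRunner

class LoggingRunner(SimpleRunner):
    """Illustrative subclass: announces each timed run before delegating."""

    def run(self, **kwargs) -> Results:
        print('starting timed run')
        return super().run(**kwargs)

case = Case(action=my_benchmark_action, runner=LoggingRunner)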

section_mean(
section: Section,
) float[source]πŸ”—

Calculate the mean value for a specific section across all results.

This method computes the mean value for the specified section (either OPS or TIMING) across all benchmark results associated with this case.

This is a very β€˜hand-wavy’ mean calculation that simply averages the means of each result. It does not take into account the number of iterations or other statistical factors. It is intended to provide a rough estimate of the overall performance for the specified section for use in comparisons between successive benchmark runs in tests looking for large performance regressions. As such, it should not be used for any rigorous statistical analysis.
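For example, a coarse regression guard in a test suite might compare against a stored baseline (a sketch: Section.TIMING follows from the description above, baseline_mean is a previously recorded value, and the 20% threshold is arbitrary):

from simplebench import Section  # import path assumed

case.run()
current = case.section_mean(Section.TIMING)
# Fail if the rough timing mean is more than 20% slower than baseline.
assert current < baseline_mean * 1.2, 'possible performance regression'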

Parameters:

section (Section) – The section for which to calculate the mean.

Returns:

The mean value for the specified section.

Return type:

float

property title: strπŸ”—

The name of the benchmark case.

If not specified, defaults to the name of the action function. Cannot be blank.

static validate_action_signature(
action: ActionRunner,
) ActionRunner[source]πŸ”—

Validate that action has correct signature.

An action function must accept the following two parameters:
  • bench: SimpleRunner

  • **kwargs: Arbitrary keyword arguments

This is equivalent to the ActionRunner protocol.

Parameters:

action (ActionRunner) – The action function to validate.

Returns:

The validated action function.

Return type:

ActionRunner

Raises:

SimpleBenchTypeError – If the action is not callable or has an invalid signature.

static validate_kwargs_variations(
value: dict[str, list[Any]] | None,
) dict[str, list[Any]][source]πŸ”—

Validate the kwargs_variations dictionary.

Validates that the kwargs_variations is a dictionary where each key is a string that is a valid Python identifier, and each value is a non-empty list.

A shallow copy of the validated dictionary and its lists is made before returning, to prevent external modification.

Parameters:

value (dict[str, list[Any]] | None) – The kwargs_variations dictionary to validate. Defaults to {} if None.

Returns:

A shallow copy of the validated kwargs_variations dictionary or {} if not provided. The keys are strings that are valid Python identifiers, and the values are non-empty lists. The lists may contain any type of values.

Return type:

dict[str, list[Any]]

Raises:
  • SimpleBenchTypeError – If the kwargs_variations is not a dictionary or if any key is not a string that is a valid Python identifier.

  • SimpleBenchValueError – If any value is not a list or is an empty list.

static validate_options(
value: Iterable[ReporterOptions] | None,
) list[ReporterOptions][source]πŸ”—

Validate the options list.

Parameters:

value (Iterable[ReporterOptions] | None) – The options iterable to validate, or None.

Returns:

A shallow copy of the validated options as a list or an empty list if not provided.

Return type:

list[ReporterOptions]

Raises:

SimpleBenchTypeError – If options is not a list or if any entry is not a ReporterOptions.

static validate_runner(
value: type[SimpleRunner] | None,
) type[SimpleRunner] | None[source]πŸ”—

Validate the runner class.

Parameters:

value (Optional[type[SimpleRunner]]) – The runner class to validate.

Returns:

The validated runner class or None.

Return type:

Optional[type[SimpleRunner]]

Raises:

SimpleBenchTypeError – If the runner is not a subclass of SimpleRunner or None.

static validate_time_range(
min_time: float,
max_time: float,
) None[source]πŸ”—

Validate that min_time < max_time for the case.

Parameters:
  • min_time (float) – The minimum time.

  • max_time (float) – The maximum time.

Raises:

SimpleBenchValueError – If min_time is greater than max_time.

static validate_variation_cols(
variation_cols: dict[str, str] | None,
kwargs_variations: dict[str, list[Any]],
) dict[str, str][source]πŸ”—

Validate the variation_cols dictionary.

Parameters:
  • variation_cols (dict[str, str] | None) – The variation_cols dictionary to validate or None.

  • kwargs_variations (dict[str, list[Any]]) – The kwargs_variations dictionary to validate against.

Returns:

A shallow copy of the validated variation_cols dictionary or {} if not provided. Each key is a keyword argument name from kwargs_variations, and each value is a non-blank string to be used as the column label for that argument in reports.

Return type:

dict[str, str]

Raises:
  • SimpleBenchTypeError – If the variation_cols is not a dictionary or if any key or value is not a string.

  • SimpleBenchValueError – If any key is not found in kwargs_variations or if any value is a blank string.

property variation_cols: dict[str, str]πŸ”—

Keyword arguments to be used as report columns denoting kwarg variations.

Each key is a keyword argument name, and the value is the column label to use for that argument. Only keywords that are also in kwargs_variations can be used here. These fields will be added to the output of reporters that support them as columns of data with the specified labels.

Note that all keys in variation_cols must be present in kwargs_variations, so updating one may require corresponding changes to both variation_cols and kwargs_variations.

Updating variation_cols does not automatically update kwargs_variations, and vice versa.

Returns:

A dictionary mapping keyword argument names to column labels.

Return type:

dict[str, str]

property warmup_iterations: intπŸ”—

The number of warmup iterations to run before the benchmark.