Stabilizer

Runtime performance analysis for C++ code.

This project is based on Stabilizer (GitHub).

Idea

Sable aims to produce an accurate assessment of a function's runtime in the presence of noise and of factors such as memory placement on the stack and heap, which can bias a performance evaluation.

To do this, the function is sampled multiple times, with randomized stack and heap padding applied before each execution.

This approach approximates the function's true runtime regardless of noise and memory placement.
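The padding idea can be sketched as follows. This is an illustrative, self-contained version, not Sable's actual internals: names like `one_trial` and the 4096-byte padding bound are assumptions, and the recursion trick is just one portable way to consume a random amount of stack.

```cpp
#include <chrono>
#include <cstddef>
#include <random>

using Clock = std::chrono::steady_clock;

// Consume roughly `pad` bytes of stack via recursion, then time fn().
// The volatile array forces each recursive call to keep a real frame.
double timed_with_stack_pad(void (*fn)(), std::size_t pad) {
    if (pad >= 64) {
        volatile char frame[64];
        frame[0] = 1;
        return timed_with_stack_pad(fn, pad - 64);
    }
    auto start = Clock::now();
    fn();
    auto stop = Clock::now();
    return std::chrono::duration<double, std::micro>(stop - start).count();
}

// One randomized trial: a throwaway heap allocation of random size shifts
// the addresses later allocations receive, and random stack padding shifts
// the measured function's frame.
double one_trial(void (*fn)(), std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> dist(0, 4096);
    char* heap_pad = new char[dist(rng) + 1];
    heap_pad[0] = 1; // touch it so it is not optimized away
    double us = timed_with_stack_pad(fn, dist(rng));
    delete[] heap_pad;
    return us;
}
```

Repeating `one_trial` many times yields the sample of runtimes that the statistics below operate on.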

Usage

Sable compares the runtimes of two functions by performing a Student's t-test on the two samples of measured runtimes. The t-test requires three conditions:

  1. Samples must be random
  2. Each sample must approximate a normal distribution
  3. Samples must be independent of one another

Keep these conditions in mind during usage, especially when setting parameters such as the number of trials: by the Central Limit Theorem, using more than 30 trials helps satisfy condition #2.
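For reference, the two-sample statistic behind such a comparison can be computed as below. This is the textbook equal-variance (pooled) form of Student's t, not necessarily the exact variant Sable implements internally:

```cpp
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

// Unbiased sample variance (divides by n - 1).
double sample_var(const std::vector<double>& v, double m) {
    double ss = 0.0;
    for (double x : v) ss += (x - m) * (x - m);
    return ss / (v.size() - 1);
}

// Two-sample Student's t statistic with pooled variance.
double t_statistic(const std::vector<double>& a, const std::vector<double>& b) {
    double ma = mean(a), mb = mean(b);
    double sp2 = ((a.size() - 1) * sample_var(a, ma) +
                  (b.size() - 1) * sample_var(b, mb)) /
                 (a.size() + b.size() - 2);
    return (ma - mb) / std::sqrt(sp2 * (1.0 / a.size() + 1.0 / b.size()));
}
```

A large |t| relative to the t-distribution's critical value at the chosen significance level leads to rejecting the null hypothesis that the two runtimes are equal.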

Adding Sable to a project

Clone this repository and build with:

$ git clone https://github.com/mbalabanski/stabilizer.git
$ cd stabilizer
$ mkdir build && cd build
$ cmake ..
$ make sable_lib

Then link the built library into your target and add include/ to your target's include directories.
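With CMake, the linking step might look like the sketch below. The target name, paths, and library filename are placeholders to adapt to your project, not something this repository prescribes:

```cmake
# Hypothetical consumer CMakeLists.txt fragment; adjust paths and names.
add_executable(my_benchmarks main.cpp)
target_include_directories(my_benchmarks PRIVATE path/to/stabilizer/include)
target_link_libraries(my_benchmarks PRIVATE path/to/stabilizer/build/libsable_lib.a)
```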

Finally, add:

#include <sable.hpp>

sable::compare_runtime

To compare the runtime of two functions, use sable::compare_runtime.

sable::TestResult sable::compare_runtime(
    void (*function1)(), 
    void (*function2)(), 
    float significance_level, 
    size_t number_of_trials
);

Notes on Usage:

significance_level - determines when the null hypothesis should be rejected. Usually 0.01, 0.05, or 0.10.

number_of_trials - should be greater than 30

This returns a sable::TestResult struct, which contains five fields: the runtime data for functions 1 and 2, the p-value (the probability of observing a difference this large if the two functions have the same true runtime), the Student's t-test statistic, and a summary of the hypotheses.

The hypotheses are summarized by the bitflags in the enum sable::HypothesisTest, where 0 means a failure to reject the null hypothesis, and a flag is set for each alternative hypothesis: Not Equal, Greater Than, and Less Than.

To print results of a test to stdout, use sable::output_test_result.

Example Usage

#include <chrono>
#include <thread>

#include <sable.hpp>

void run_func()
{
    std::this_thread::sleep_for(std::chrono::microseconds(10));
}

void run_func2()
{
    std::this_thread::sleep_for(std::chrono::microseconds(14));
}

int main()
{
    auto test_results = sable::compare_runtime(
        run_func, 
        run_func2, 
        0.05, // significance level
        100 // # of trials
    );

    sable::output_test_result(test_results);

    return 0;
}

For more examples, see examples/wait.cpp, examples/calc.cpp, and examples/confusion_matrix.cpp.

sable::compare_runtime_multithreaded

Similar to sable::compare_runtime, except the trials run in parallel to reduce evaluation time.

sable::TestResult sable::compare_runtime_multithreaded(
    void (*function1)(), 
    void (*function2)(), 
    float significance_level, 
    size_t number_of_trials,
    size_t threads
);

Notes on Usage

Because this feature relies on executing functions in parallel, the function being evaluated must be pure, meaning that it should not read or modify outside variables during its execution.

This ensures that there are no race conditions or deadlocks when the function's runtime is evaluated concurrently.

Other notes on usage are the same as for sable::compare_runtime.
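One way to split trials across threads is sketched below. This is illustrative, not Sable's internals: each thread times its own share of the trials and writes into a disjoint slice of the results vector, so no locking is needed as long as the measured function is pure.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

std::vector<double> parallel_trials(void (*fn)(), std::size_t trials,
                                    std::size_t n_threads) {
    std::vector<double> results(trials);
    std::vector<std::thread> pool;
    std::size_t per_thread = (trials + n_threads - 1) / n_threads;

    for (std::size_t t = 0; t < n_threads; ++t) {
        pool.emplace_back([=, &results] {
            // Each thread owns indices [begin, end), so writes never overlap.
            std::size_t begin = t * per_thread;
            std::size_t end = std::min(begin + per_thread, trials);
            for (std::size_t i = begin; i < end; ++i) {
                auto start = std::chrono::steady_clock::now();
                fn();
                auto stop = std::chrono::steady_clock::now();
                results[i] =
                    std::chrono::duration<double, std::micro>(stop - start).count();
            }
        });
    }
    for (auto& th : pool) th.join();
    return results;
}
```

Note that concurrent timing can add scheduling noise of its own, which is another reason the randomized-sampling conditions above matter.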

Usage

#include <chrono>
#include <thread>

#include <sable.hpp>

void run_func()
{
    std::this_thread::sleep_for(std::chrono::microseconds(10));
}

void run_func2()
{
    std::this_thread::sleep_for(std::chrono::microseconds(14));
}

int main()
{
    auto test_results = sable::compare_runtime_multithreaded(
        run_func, 
        run_func2, 
        0.05, // significance level
        100, // # of trials
        std::thread::hardware_concurrency() // # of threads
    );

    sable::output_test_result(test_results);

    return 0;
}

sable::watch_function

sable::watch_function compares a function's current runtime to its runtime from a previous run.

std::optional<sable::TestResult> sable::watch_function(
    const std::string& identifier, 
    void (*func)(), 
    size_t trials,
    float alpha
);

Notes on Usage

identifier - must be unique for each watch_function call

alpha - significance level; determines when the null hypothesis should be rejected. Usually 0.01, 0.05, or 0.10.

trials - should be greater than 30

Returns - a std::optional<sable::TestResult>

This function saves runtime data to a file at ./sable/[identifier].csv, which is why each watch_function call needs a unique identifier.

If no data file or sable directory exists, this function creates them and writes the headings and runtime data.
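The persistence step described above could be sketched as follows. The file layout and column names here are assumptions for illustration, not Sable's exact on-disk format:

```cpp
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

// Create ./sable/ if missing, write a heading on first use, then append
// one row per trial. Column names are assumed, not Sable's actual schema.
void append_runtimes(const std::string& identifier,
                     const std::vector<double>& runtimes) {
    std::filesystem::create_directories("sable");
    std::string path = "sable/" + identifier + ".csv";
    bool fresh = !std::filesystem::exists(path);

    std::ofstream out(path, std::ios::app);
    if (fresh) out << "trial,runtime_us\n";
    for (std::size_t i = 0; i < runtimes.size(); ++i)
        out << i << ',' << runtimes[i] << '\n';
}
```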

This function runs a Student's t-test (like compare_runtime) to determine whether the runtime differs from the previous execution.

Returns

This function returns std::nullopt when there is no existing data file (sable/[identifier].csv) or when the file cannot be read. Otherwise, it returns the results of the Student's t-test.

Usage

#include <chrono>
#include <thread>

#include <sable.hpp>

void run_func()
{
    const size_t duration = 40; // change between compilations
    std::this_thread::sleep_for(std::chrono::microseconds(duration));
}

int main()
{
    const size_t trials = 100;

    auto test_results = sable::watch_function("WaitFunction", run_func, trials, 0.05);

    if (test_results)
        sable::output_test_result(test_results.value());

    return 0;
}

Further examples

For further usage, see examples/ directory.

Build any of the examples using make [ExampleName].
