Changelog

3.1.1 (2017-07-26)

3.1.0 (2017-07-21)

- Added an "operations per second" metric (the `ops` field in `Stats`) that shows the call rate of the code being tested. Contributed by Alexey Popravka in #78.
- Added a `time` field in `commit_info`. Contributed by "varac" in #71.
- Added an `author_time` field in `commit_info`. Contributed by "varac" in #75.
- Fixed the leaking of credentials by masking the URL printed when storing data to elasticsearch.
- Added a `--benchmark-netrc` option to use credentials from a netrc file when storing data to elasticsearch. Both contributed by Andre Bianchi in #73.
- Fixed docs on hooks. Contributed by Andre Bianchi in #74.
- Remove git and hg as system dependencies when guessing the project name.
3.1.0a2 (2017-03-27)

- `machine_info` now contains more detailed information about the CPU, in particular the exact model. Contributed by Antonio Cuni in #61.
- Added `benchmark.extra_info`, which you can use to save arbitrary stuff in the JSON. Contributed by Antonio Cuni in the same PR as above.
- Fix support for latest PyGal version (histograms). Contributed by Swen Kooij in #68.
- Added support for getting `commit_info` when not running in the root of the repository. Contributed by Vara Canero in [#69](https://github.com/ionelmc/pytest-benchmark/pull/69).
- Added short forms for the `--storage`/`--verbose` options in the CLI.
- Added an alternate `pytest-benchmark` CLI bin (in addition to `py.test-benchmark`) to match the madness in pytest.
- Fix some issues with `--help` in the CLI.
- Improved git remote parsing (for `commit_info` in JSON outputs).
- Fixed the default value for `--benchmark-columns`.
- Fixed comparison mode (loading was done too late).
- Remove the project name from the autosave name. This gets the old brief naming from 3.0 back.
3.1.0a1 (2016-10-29)

- Added the `--benchmark-columns` command line option. It selects what columns are displayed in the result table. Contributed by Antonio Cuni in #34.
- Added support for grouping by specific test parametrization (`--benchmark-group-by=param:NAME`, where `NAME` is your param name). Contributed by Antonio Cuni in #37.
- Added support for `name` or `fullname` in `--benchmark-sort`. Contributed by Antonio Cuni in #37.
- Changed the signature of the `pytest_benchmark_generate_json` hook to take 2 new arguments: `machine_info` and `commit_info`.
- Changed `--benchmark-histogram` to plot groups instead of name-matching runs.
- Changed `--benchmark-histogram` to plot exactly what you compared against. Now it's 1:1 with the compare feature.
- Changed `--benchmark-compare` to allow globs. You can compare against all the previous runs now.
- Changed `--benchmark-group-by` to allow multiple values separated by comma.
  Example: `--benchmark-group-by=param:foo,param:bar`.
- Added a command line tool to compare previous data: `py.test-benchmark`. It has two commands:
  - `list` - lists all the available files.
  - `compare` - displays result tables. Takes optional arguments: `--sort=COL`, `--group-by=LABEL`, `--columns=LABELS`, `--histogram=[FILENAME-PREFIX]`.
- Added `--benchmark-cprofile`, which profiles the last run of the benchmarked function. Contributed by Petr Šebek.
- Changed `--benchmark-storage` so it now allows elasticsearch storage, so data can be stored to elasticsearch instead of to json files. Contributed by Petr Šebek in #58.
3.0.0 (2015-11-08)

- Improved `--help` text for `--benchmark-histogram`, `--benchmark-save` and `--benchmark-autosave`.
- Benchmarks that raised exceptions during the test now have special highlighting in the result table (red background).
- Benchmarks that raised exceptions are not included in the saved data anymore (you can still get the old behavior back by implementing `pytest_benchmark_generate_json` in your `conftest.py`).
- The plugin will use pytest's warning system for warnings. There are 2 categories: `WBENCHMARK-C` (compare mode issues) and `WBENCHMARK-U` (usage issues).
- The red warnings are only shown if `--benchmark-verbose` is used. They will still always be shown in the pytest-warnings section.
- Using the benchmark fixture more than once is disallowed (it will raise an exception).
- Not using the benchmark fixture (but requiring it) will issue a warning (`WBENCHMARK-U1`).
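The JSON generation mentioned above can be customized from `conftest.py`. A minimal sketch using the related `pytest_benchmark_update_json` hook, which post-processes the report dict right before it is saved (the `extra` key added here is hypothetical, not part of the plugin's schema):

```python
# conftest.py sketch: post-process the JSON report before it is saved.
# The "extra" key is made up for illustration.

def pytest_benchmark_update_json(config, benchmarks, output_json):
    output_json["extra"] = {"note": "built locally"}
```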
3.0.0rc1 (2015-10-25)

- Changed `--benchmark-warmup` to take an optional value and automatically activate on PyPy (the default value is `auto`). MAY BE BACKWARDS INCOMPATIBLE
- Removed the version check in compare mode (previously there was a warning if the current version was lower than what's in the file).
3.0.0b3 (2015-10-22)

- Changed how comparisons are displayed in the result table. Now previous runs are shown as normal runs and names get a special suffix indicating the origin, e.g. "test_foobar (NOW)" or "test_foobar (0123)".
- Fixed sorting in the result table. Now rows are sorted by the sort column, and then by name.
- Show the plugin version in the header section.
- Moved the display of default options to the header section.
3.0.0b2 (2015-10-17)

- Added a `--benchmark-disable` option. It's automatically activated when xdist is on.
- When xdist is on, or statistics can't be imported, `--benchmark-disable` is automatically activated (instead of `--benchmark-skip`). BACKWARDS INCOMPATIBLE
- Replaced the deprecated `__multicall__` with the new hookwrapper system.
- Improved the description for `--benchmark-max-time`.
3.0.0b1 (2015-10-13)

- Tests are sorted alphabetically in the results table.
- Failing to import `statistics` doesn't create hard failures anymore. Benchmarks are automatically skipped if an import failure occurs. This would happen on Python 3.2 (or earlier Python 3).
3.0.0a4 (2015-10-08)

- Changed how failures to get commit info are handled: now they are soft failures. Previously the whole test suite failed, just because you didn't have `git`/`hg` installed.

3.0.0a3 (2015-10-02)

- Added progress indication when computing stats.

3.0.0a2 (2015-09-30)

- Fixed accidental output capturing caused by capturemanager misuse.
3.0.0a1 (2015-09-13)

- Added JSON report saving (the `--benchmark-json` command line argument). Based on initial work from Dave Collins in #8.
- Added benchmark data storage (the `--benchmark-save` and `--benchmark-autosave` command line arguments).
- Added comparison to previous runs (the `--benchmark-compare` command line argument).
- Added performance regression checks (the `--benchmark-compare-fail` command line argument).
- Added the possibility to group by various parts of the test name (the `--benchmark-compare-group-by` command line argument).
- Added historical plotting (the `--benchmark-histogram` command line argument).
- Added an option to fine-tune the calibration (the `--benchmark-calibration-precision` command line argument and the `calibration_precision` marker option).
- Changed `benchmark_weave` to no longer be a context manager. Cleanup is performed automatically. BACKWARDS INCOMPATIBLE
- Added a `benchmark.weave` method (alternative to the `benchmark_weave` fixture).
- Added new hooks to allow customization:
  - `pytest_benchmark_generate_machine_info(config)`
  - `pytest_benchmark_update_machine_info(config, info)`
  - `pytest_benchmark_generate_commit_info(config)`
  - `pytest_benchmark_update_commit_info(config, info)`
  - `pytest_benchmark_group_stats(config, benchmarks, group_by)`
  - `pytest_benchmark_generate_json(config, benchmarks, include_data)`
  - `pytest_benchmark_update_json(config, benchmarks, output_json)`
  - `pytest_benchmark_compare_machine_info(config, benchmarksession, machine_info, compared_benchmark)`
- Changed the timing code: tracers are automatically disabled when running the test function (like coverage tracers).
- Fixed an issue with the calibration code getting stuck.
- Added pedantic mode via `benchmark.pedantic()`. This mode disables calibration and allows a setup function.
2.5.0 (2015-06-20)

- Improved the test suite a bit (not using cram anymore).
- Improved the help text on the `--benchmark-warmup` option.
- Made `warmup_iterations` available as a marker argument (e.g. `@pytest.mark.benchmark(warmup_iterations=1234)`).
- Fixed `--benchmark-verbose`'s printouts to work properly with output capturing.
- Changed how warmup iterations are computed (now the number of total iterations is used, instead of just the rounds).
- Fixed a bug where calibration would run forever.
- Disabled red/green coloring (it was kinda random) when there's a single test in the results table.
2.4.1 (2015-03-16)

- Fixed a regression: the plugin was raising `ValueError: no option named 'dist'` when xdist wasn't installed.

2.4.0 (2015-03-12)

- Added a `benchmark_weave` experimental fixture.
- Fixed internal failures when the xdist plugin is active.
- Automatically disable benchmarks if xdist is active.
2.3.0 (2014-12-27)

- Moved the warmup into the calibration phase. Solves issues with benchmarking on PyPy.
- Added a `--benchmark-warmup-iterations` option to fine-tune that.
2.2.0 (2014-12-26)

- Make the default rounds smaller (so that variance is more accurate).
- Show the defaults in the `--help` section.

2.1.0 (2014-12-20)

- Simplify the calibration code so that the round is smaller.
- Add diagnostic output for the calibration code (`--benchmark-verbose`).

2.0.0 (2014-12-19)

- Replace the context-manager based API with a simple callback interface. BACKWARDS INCOMPATIBLE
- Implement timer calibration for precise measurements.

1.0.0 (2014-12-15)

- Use a precise default timer for PyPy.