
Conversation


@dependabot dependabot bot commented on behalf of github Oct 31, 2022

Bumps torchmetrics from 0.9.2 to 0.10.2.

Release notes

Sourced from torchmetrics's releases.

Fixed Performance

[0.10.2] - 2022-10-31

Changed

  • Changed in-place operation to out-of-place operation in pairwise_cosine_similarity (#1288)

Fixed

  • Fixed high memory usage for certain classification metrics when average='micro' (#1286)
  • Fixed precision problems when structural_similarity_index_measure was used with autocast (#1291; see the autocast sketch after this list)
  • Fixed slow performance for confusion matrix-based metrics (#1302)
  • Fixed restrictive dtype checking in spearman_corrcoef when used with autocast (#1303)
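For context, here is a minimal sketch of the kind of autocast usage the two autocast fixes (#1291, #1303) target. The functional names are torchmetrics' public API, but the shapes, dtypes, and device choice are illustrative assumptions, not taken from the release notes:

```python
import torch
from torchmetrics.functional import (
    spearman_corrcoef,
    structural_similarity_index_measure,
)

preds = torch.rand(4, 3, 32, 32)   # hypothetical image batch
target = torch.rand(4, 3, 32, 32)
scores = torch.randn(100)          # hypothetical 1-d predictions
labels = torch.randn(100)

# Under autocast, tensors may arrive in reduced precision (float16/bfloat16);
# 0.10.2 relaxes the dtype checks and precision handling in these metrics.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    ssim = structural_similarity_index_measure(preds, target)
    rho = spearman_corrcoef(scores, labels)
```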

Contributors

@​SkafteNicki

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Minor patch release

[0.10.1] - 2022-10-21

Fixed

  • Fixed broken clone method for classification metrics (#1250)
  • Fixed unintentional downloading of nltk.punkt when lsum not in rouge_keys (#1258)
  • Fixed type casting in MAP metric between bool and float32 (#1150)

Contributors

@​dreaquil, @​SkafteNicki, @​stancld

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Large changes to classifications

TorchMetrics v0.10 is now out, significantly changing the whole classification package. This blog post will go over the reasons why the classification package needs to be refactored, what it means for our end users, and finally, what benefits it gives. A guide on how to upgrade your code to the recent changes can be found near the bottom.

Why the classification metrics need to change

We have for a long time known that there were some underlying problems with how we initially structured the classification package. Essentially, classification tasks can be divided into binary, multiclass, or multilabel, and determining which task a user is trying to run a given metric on is hard based on the input alone. The reason a package such as sklearn can do this is that it only supports input in very specific formats (no multi-dimensional arrays and no support for both integer and probability/logit formats).

This meant that some metrics, especially for binary tasks, could have been calculating something different than expected if the user provided a shape other than the expected one. This goes against a core value of TorchMetrics: our users should, of course, be able to trust that the metric they are evaluating gives the expected result.
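A small, hypothetical illustration of that ambiguity (the tensors and the two readings are ours, not from the post):

```python
import torch

# The same pair of tensors can plausibly describe two different tasks:
preds = torch.tensor([0, 1, 1, 0])
target = torch.tensor([0, 1, 0, 0])

# Reading 1: binary classification with hard 0/1 predictions.
# Reading 2: multiclass classification with num_classes=2 and integer labels.
# Nothing in the shapes or dtypes distinguishes the two, so a metric that
# infers the task from its input has to guess.
```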

Additionally, the classification metrics lacked consistency: for some metrics num_classes=2 meant binary, and for others num_classes=1 meant binary. You can read more about the underlying reasons for this refactor in this and this issue.

The solution

The solution we went with was to split every classification metric into three separate metrics with the prefixes binary_*, multiclass_*, and multilabel_*. This solves a number of the above problems out of the box because it becomes easier for us to match our users' expectations for any given input shape. It additionally has some other benefits, both for us as developers and for end users.
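A rough sketch of what the split looks like in practice, assuming the new functional names binary_accuracy, multiclass_accuracy, and multilabel_accuracy from the refactored classification package (shapes and values below are illustrative, not taken from the post):

```python
import torch
from torchmetrics.functional.classification import (
    binary_accuracy,
    multiclass_accuracy,
    multilabel_accuracy,
)

# Binary: probabilities/logits or hard labels against 0/1 targets.
binary_accuracy(torch.rand(8), torch.randint(2, (8,)))

# Multiclass: the number of classes is always explicit.
multiclass_accuracy(torch.randn(8, 5), torch.randint(5, (8,)), num_classes=5)

# Multilabel: one column per label, num_labels explicit.
multilabel_accuracy(torch.rand(8, 3), torch.randint(2, (8, 3)), num_labels=3)
```

Because the task is named in the call rather than inferred, the metric no longer has to guess binary vs. multiclass vs. multilabel from the input shape.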

... (truncated)

Changelog

Sourced from torchmetrics's changelog.

[0.10.2] - 2022-10-31

Changed

  • Changed in-place operation to out-of-place operation in pairwise_cosine_similarity (#1288)

Fixed

  • Fixed high memory usage for certain classification metrics when average='micro' (#1286)
  • Fixed precision problems when structural_similarity_index_measure was used with autocast (#1291)
  • Fixed slow performance for confusion matrix-based metrics (#1302)
  • Fixed restrictive dtype checking in spearman_corrcoef when used with autocast (#1303)

[0.10.1] - 2022-10-21

Fixed

  • Fixed broken clone method for classification metrics (#1250)
  • Fixed unintentional downloading of nltk.punkt when lsum not in rouge_keys (#1258)
  • Fixed type casting in MAP metric between bool and float32 (#1150)

[0.10.0] - 2022-10-04

Added

  • Added a new NLP metric InfoLM (#915)
  • Added Perplexity metric (#922)
  • Added ConcordanceCorrCoef metric to regression package (#1201)
  • Added argument normalize to LPIPS metric (#1216)
  • Added support for multiprocessing of batches in PESQ metric (#1227)
  • Added support for multioutput in PearsonCorrCoef and SpearmanCorrCoef (#1200)

Changed

... (truncated)

Commits
  • bc7091f releasing 0.10.2
  • 558c61c Fix slow performance for confusion matrix based metrics (#1302)
  • 75df8e0 Fix autocast with spearman metric (#1303)
  • dccd432 Make LPIPS code example use the interval [-1, 1] (#1296)
  • b595576 Fix typo in LPIPS class docstring and in an error message (#1295)
  • c4ad4f6 Fix structural_similarity_index_measure with autocast (#1291)
  • a4eabd9 Fix inplace operation in pairwise_cosine_similarity (#1288)
  • f12e586 Reduce memory in classification metrics when average='micro' (#1286)
  • dd30b48 Fix mistakes in classification docs (#1287)
  • 9eea5c9 docs: Fix a typo in Label Ranking Loss (#1280)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [torchmetrics](https://github.com/Lightning-AI/metrics) from 0.9.2 to 0.10.2.
- [Release notes](https://github.com/Lightning-AI/metrics/releases)
- [Changelog](https://github.com/Lightning-AI/metrics/blob/v0.10.2/CHANGELOG.md)
- [Commits](Lightning-AI/torchmetrics@v0.9.2...v0.10.2)

---
updated-dependencies:
- dependency-name: torchmetrics
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added the dependencies label Oct 31, 2022

dependabot bot commented on behalf of github Nov 16, 2022

Superseded by #15.

@dependabot dependabot bot closed this Nov 16, 2022
@dependabot dependabot bot deleted the dependabot/pip/torchmetrics-0.10.2 branch November 16, 2022 23:18
