
feat: add class_names support to Precision and Recall metrics#3732

Open
rogueslasher wants to merge 2 commits into pytorch:master from rogueslasher:feature/class-names-precision-recall

Conversation

@rogueslasher
Contributor

Fixes #1466

Description:
Adds an optional class_names parameter to _BasePrecisionRecall, allowing compute() to return a labeled dict instead of an unnamed tensor when average=False or average=None. Useful for per-class metric tracking where knowing which score belongs to which class matters for logging and visualization.

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

@github-actions github-actions Bot added the module: metrics Metrics module label Apr 14, 2026
@aaishwarymishra
Collaborator

Won't this change break the Fbeta metric when Precision and Recall have average=False and class_names set? We would need to update Fbeta to support this too.

Other metrics may also benefit from this change, though I am not sure which.

Comment thread ignite/metrics/precision.py Outdated
@rogueslasher
Contributor Author

@aaishwarymishra would raising an error for those conditions be a better option, or should we add full class_names support to Fbeta?

@aaishwarymishra
Collaborator

I am not sure; adding support for class names in Fbeta would be easier, but Fbeta uses MetricsLambda for operator overloading. We could update them. @vfdev-5, what do you think would be appropriate?
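The compatibility concern can be illustrated in plain Python (a toy stand-in for ignite's MetricsLambda-style composition, where Fbeta is built from arithmetic on Precision and Recall results; the dict literal below is illustrative, not ignite code):

```python
# With average=False and no class_names, the per-class result behaves
# like a sequence, and element-wise arithmetic composes naturally:
precision_as_list = [0.8, 0.5]
doubled = [2 * p for p in precision_as_list]
print(doubled)
# [1.6, 1.0]

# With class_names, the result becomes a dict, and the same naive
# arithmetic raises a TypeError, which is what would break a
# downstream metric composed via operator overloading:
precision_as_dict = {"cat": 0.8, "dog": 0.5}
try:
    _ = precision_as_dict * 2.0  # type: ignore[operator]
except TypeError as e:
    print("dict arithmetic fails:", e)
```

This is why the thread weighs two options: raise an error when the combination is unsupported, or teach the composed metric to handle the labeled form.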

@steaphenai
Contributor

Could we please clarify the Fbeta behavior when class_names is used with average=False/None?

@vfdev-5
Collaborator

vfdev-5 commented Apr 21, 2026

I am not sure; adding support for class names in Fbeta would be easier, but Fbeta uses MetricsLambda for operator overloading. We could update them. @vfdev-5, what do you think would be appropriate?

If we can compute the F1 score per class, then we should propagate the dict structure through to the F1 output; otherwise, take .values() of the dict output from Precision and Recall.
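The first option (propagating the dict structure) can be sketched as follows (a minimal plain-Python illustration assuming labeled per-class dicts; `f1_per_class` is a hypothetical helper, not ignite's actual implementation):

```python
from typing import Dict


def f1_per_class(
    precision: Dict[str, float], recall: Dict[str, float]
) -> Dict[str, float]:
    """Propagate the class-name labels through the F1 computation:
    F1 = 2 * P * R / (P + R), computed per class, keeping the keys."""
    return {
        name: 0.0
        if precision[name] + recall[name] == 0
        else 2 * precision[name] * recall[name] / (precision[name] + recall[name])
        for name in precision
    }


p = {"cat": 0.8, "dog": 0.5}
r = {"cat": 0.6, "dog": 0.5}
print(f1_per_class(p, r))  # per-class F1 keyed by class name

# The fallback option strips the labels and works on raw values:
print(list(p.values()))
# [0.8, 0.5]
```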



Development

Successfully merging this pull request may close these issues.

Add ability to add class name for metrics
