Metrics
Mean Average Precision (mAP)

Bases: BaseMetric
A class used to compute the Mean Average Precision (mAP) metric.
mAP is a popular metric for object detection tasks, measuring the average precision across all classes and IoU thresholds.
Source code in maestro/trainer/common/utils/metrics.py
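To make the definition above concrete, the sketch below shows how per-class average precision (AP) values at several IoU thresholds collapse into a single mAP score. The two classes and their AP values are invented for illustration only; it does not reflect how maestro matches detections to ground truth internally.

```python
# Illustration only: mAP is the mean of per-class average precision (AP)
# values, taken over a range of IoU thresholds. The AP values below are
# invented; maestro's implementation computes them from matched detections.
import numpy as np

iou_thresholds = np.arange(0.50, 1.00, 0.05)  # COCO-style 0.50:0.95 (10 thresholds)
ap_per_class = {                               # {class_id: AP at each IoU threshold}
    0: np.linspace(0.90, 0.40, num=len(iou_thresholds)),
    1: np.linspace(0.80, 0.30, num=len(iou_thresholds)),
}

map_50_95 = float(np.mean([aps.mean() for aps in ap_per_class.values()]))  # mean over classes and IoUs
map_50 = float(np.mean([aps[0] for aps in ap_per_class.values()]))         # mean over classes at IoU 0.50
print(f"mAP@0.50:0.95 = {map_50_95:.3f}, mAP@0.50 = {map_50:.3f}")
```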
Functions

compute(targets, predictions)
Computes the mAP metrics based on the targets and predictions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| targets | List[Detections] | The ground truth detections. | required |
| predictions | List[Detections] | The predicted detections. | required |
Returns:
| Type | Description |
|---|---|
| dict[str, float] | A dictionary of computed mAP metrics with metric names as keys and their values. |
Source code in maestro/trainer/common/utils/metrics.py
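A minimal usage sketch for compute(). The import path and the class name MeanAveragePrecisionMetric are assumptions to be checked against the source file referenced above, and Detections is taken to be supervision.Detections, as the parameter table suggests.

```python
import numpy as np
import supervision as sv
# Hypothetical import path and class name -- check maestro/trainer/common/utils/metrics.py
from maestro.trainer.common.utils.metrics import MeanAveragePrecisionMetric

# One image: a single ground-truth box and a slightly shifted prediction.
targets = [
    sv.Detections(
        xyxy=np.array([[10.0, 10.0, 60.0, 60.0]]),
        class_id=np.array([0]),
    )
]
predictions = [
    sv.Detections(
        xyxy=np.array([[12.0, 11.0, 61.0, 58.0]]),
        class_id=np.array([0]),
        confidence=np.array([0.9]),
    )
]

metric = MeanAveragePrecisionMetric()
results = metric.compute(targets, predictions)  # e.g. {"map50:95": ..., "map50": ..., ...}
for name, value in results.items():
    print(f"{name}: {value:.4f}")
```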
describe()
Returns a list of metric names that this class will compute.
Returns:
| Type | Description |
|---|---|
| list[str] | A list of metric names. |
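One way describe() is typically useful: its names line up with the keys of the dictionary returned by compute(), so it can seed a log or CSV header before any values exist. The class name below is the same assumption as in the earlier sketch.

```python
# Hypothetical class name; see the note above.
metric = MeanAveragePrecisionMetric()

# describe() lists the keys that compute() will later return, which makes it
# handy for pre-registering columns in a logger or CSV header.
columns = metric.describe()
print(",".join(["epoch", *columns]))

# Later, after an evaluation pass:
# results = metric.compute(targets, predictions)
# print(",".join([str(epoch)] + [f"{results[name]:.4f}" for name in columns]))
```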
Word Error Rate (WER)

Bases: BaseMetric
A class used to compute the Word Error Rate (WER) metric.
WER measures the edit distance between predicted and reference transcriptions at the word level, commonly used in speech recognition and machine translation.
Source code in maestro/trainer/common/utils/metrics.py
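As a plain-Python illustration of the definition (not maestro's implementation), WER is the word-level Levenshtein distance between reference and prediction, divided by the number of reference words:

```python
def word_error_rate(reference: str, prediction: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), prediction.split()
    # dp[i][j] = edits to turn the first i reference words into the first j predicted words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # 1 substitution / 6 words ≈ 0.167
```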
Functions

compute(targets, predictions)
Computes the WER metric based on the targets and predictions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| targets | List[str] | The ground truth texts. | required |
| predictions | List[str] | The predicted texts. | required |
Returns:
| Type | Description |
|---|---|
| dict[str, float] | A dictionary of computed WER metrics with metric names as keys and their values. |
Source code in maestro/trainer/common/utils/metrics.py
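A minimal usage sketch for compute(). As before, the import path and the class name WordErrorRateMetric are assumptions to be checked against the source file above; the inputs are plain lists of strings.

```python
# Hypothetical import path and class name -- check maestro/trainer/common/utils/metrics.py
from maestro.trainer.common.utils.metrics import WordErrorRateMetric

targets = ["the quick brown fox", "hello world"]     # ground-truth transcriptions
predictions = ["the quick brown fox", "hello word"]  # model outputs

metric = WordErrorRateMetric()
results = metric.compute(targets, predictions)       # e.g. {"wer": ...}
print(results)
```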
describe()
Returns a list of metric names that this class will compute.
Returns:
| Type | Description |
|---|---|
| list[str] | A list of metric names. |
Character Error Rate (CER)

Bases: BaseMetric
A class used to compute the Character Error Rate (CER) metric.
CER is similar to WER but operates at the character level, making it useful for tasks like optical character recognition (OCR) and handwriting recognition.
Source code in maestro/trainer/common/utils/metrics.py
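For an independent cross-check of character-level (and word-level) error rates, the jiwer package exposes both directly. The sketch below assumes jiwer is installed; it is not necessarily what maestro uses internally.

```python
# Independent cross-check using the jiwer package (pip install jiwer).
# This is not necessarily what maestro uses under the hood.
import jiwer

reference = "recognition of handwritten text"
prediction = "recogniton of handwriten text"

print("WER:", jiwer.wer(reference, prediction))  # word-level error rate
print("CER:", jiwer.cer(reference, prediction))  # character-level error rate
```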
Functions

compute(targets, predictions)
Computes the CER metric based on the targets and predictions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| targets | List[str] | The ground truth texts. | required |
| predictions | List[str] | The predicted texts. | required |
Returns:
| Type | Description |
|---|---|
| dict[str, float] | A dictionary of computed CER metrics with metric names as keys and their values. |
Source code in maestro/trainer/common/utils/metrics.py
describe()
Returns a list of metric names that this class will compute.
Returns:
| Type | Description |
|---|---|
| list[str] | A list of metric names. |