Metrics

MeanAveragePrecisionMetric

Bases: BaseMetric

A class used to compute the Mean Average Precision (mAP) metric.

mAP is a popular metric for object detection tasks, measuring the average precision across all classes and IoU thresholds.

Source code in maestro/trainer/common/utils/metrics.py
class MeanAveragePrecisionMetric(BaseMetric):
    """A class used to compute the Mean Average Precision (mAP) metric.

    mAP is a popular metric for object detection tasks, measuring the average precision
    across all classes and IoU thresholds.
    """

    name = "mean_average_precision"

    def describe(self) -> list[str]:
        """Returns a list of metric names that this class will compute.

        Returns:
            List[str]: A list of metric names.
        """
        return ["map50:95", "map50", "map75"]

    def compute(self, targets: list[sv.Detections], predictions: list[sv.Detections]) -> dict[str, float]:
        """Computes the mAP metrics based on the targets and predictions.

        Args:
            targets (List[sv.Detections]): The ground truth detections.
            predictions (List[sv.Detections]): The predicted detections.

        Returns:
            Dict[str, float]: A dictionary of computed mAP metrics with metric names as
                keys and their values.
        """
        result = MeanAveragePrecision().update(targets=targets, predictions=predictions).compute()
        return {"map50:95": result.map50_95, "map50": result.map50, "map75": result.map75}

Functions

compute(targets, predictions)

Computes the mAP metrics based on the targets and predictions.

Parameters:

Name         Type              Description                   Default
targets      List[Detections]  The ground truth detections.  required
predictions  List[Detections]  The predicted detections.     required

Returns:

Type              Description
dict[str, float]  A dictionary of computed mAP metrics with metric names as keys and their values.

Source code in maestro/trainer/common/utils/metrics.py
def compute(self, targets: list[sv.Detections], predictions: list[sv.Detections]) -> dict[str, float]:
    """Computes the mAP metrics based on the targets and predictions.

    Args:
        targets (List[sv.Detections]): The ground truth detections.
        predictions (List[sv.Detections]): The predicted detections.

    Returns:
        Dict[str, float]: A dictionary of computed mAP metrics with metric names as
            keys and their values.
    """
    result = MeanAveragePrecision().update(targets=targets, predictions=predictions).compute()
    return {"map50:95": result.map50_95, "map50": result.map50, "map75": result.map75}

describe()

Returns a list of metric names that this class will compute.

Returns:

Type       Description
list[str]  A list of metric names.

Source code in maestro/trainer/common/utils/metrics.py
def describe(self) -> list[str]:
    """Returns a list of metric names that this class will compute.

    Returns:
        List[str]: A list of metric names.
    """
    return ["map50:95", "map50", "map75"]

WordErrorRateMetric

Bases: BaseMetric

A class used to compute the Word Error Rate (WER) metric.

WER measures the edit distance between predicted and reference transcriptions at the word level, commonly used in speech recognition and machine translation.
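
For reference, the conventional word-level definition (this is the standard WER formula, not taken from the source below):

$$
\mathrm{WER} = \frac{S + D + I}{N}
$$

where S, D, and I are the numbers of word substitutions, deletions, and insertions needed to turn the prediction into the reference, and N is the number of words in the reference.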

Source code in maestro/trainer/common/utils/metrics.py
class WordErrorRateMetric(BaseMetric):
    """A class used to compute the Word Error Rate (WER) metric.

    WER measures the edit distance between predicted and reference transcriptions
    at the word level, commonly used in speech recognition and machine translation.
    """

    name = "word_error_rate"

    def describe(self) -> list[str]:
        """Returns a list of metric names that this class will compute.

        Returns:
            List[str]: A list of metric names.
        """
        return ["wer"]

    def compute(self, targets: list[str], predictions: list[str]) -> dict[str, float]:
        """Computes the WER metric based on the targets and predictions.

        Args:
            targets (List[str]): The ground truth texts.
            predictions (List[str]): The predicted texts.

        Returns:
            Dict[str, float]: A dictionary of computed WER metrics with metric names as
                keys and their values.
        """
        if len(targets) != len(predictions):
            raise ValueError("The number of targets and predictions must be the same.")

        total_wer = 0.0
        count = len(targets)

        for target, prediction in zip(targets, predictions):
            total_wer += wer(target, prediction)

        average_wer = total_wer / count if count > 0 else 0.0
        return {"wer": average_wer}

Functions

compute(targets, predictions)

Computes the WER metric based on the targets and predictions.

Parameters:

Name         Type       Description              Default
targets      List[str]  The ground truth texts.  required
predictions  List[str]  The predicted texts.     required

Returns:

Type              Description
dict[str, float]  A dictionary of computed WER metrics with metric names as keys and their values.

Source code in maestro/trainer/common/utils/metrics.py
def compute(self, targets: list[str], predictions: list[str]) -> dict[str, float]:
    """Computes the WER metric based on the targets and predictions.

    Args:
        targets (List[str]): The ground truth texts.
        predictions (List[str]): The predicted texts.

    Returns:
        Dict[str, float]: A dictionary of computed WER metrics with metric names as
            keys and their values.
    """
    if len(targets) != len(predictions):
        raise ValueError("The number of targets and predictions must be the same.")

    total_wer = 0.0
    count = len(targets)

    for target, prediction in zip(targets, predictions):
        total_wer += wer(target, prediction)

    average_wer = total_wer / count if count > 0 else 0.0
    return {"wer": average_wer}

describe()

Returns a list of metric names that this class will compute.

Returns:

Type       Description
list[str]  A list of metric names.

Source code in maestro/trainer/common/utils/metrics.py
def describe(self) -> list[str]:
    """Returns a list of metric names that this class will compute.

    Returns:
        List[str]: A list of metric names.
    """
    return ["wer"]

CharacterErrorRateMetric

Bases: BaseMetric

A class used to compute the Character Error Rate (CER) metric.

CER is similar to WER but operates at the character level, making it useful for tasks like optical character recognition (OCR) and handwriting recognition.

Source code in maestro/trainer/common/utils/metrics.py
class CharacterErrorRateMetric(BaseMetric):
    """A class used to compute the Character Error Rate (CER) metric.

    CER is similar to WER but operates at the character level, making it useful for
    tasks like optical character recognition (OCR) and handwriting recognition.
    """

    name = "character_error_rate"

    def describe(self) -> list[str]:
        """Returns a list of metric names that this class will compute.

        Returns:
            List[str]: A list of metric names.
        """
        return ["cer"]

    def compute(self, targets: list[str], predictions: list[str]) -> dict[str, float]:
        """Computes the CER metric based on the targets and predictions.

        Args:
            targets (List[str]): The ground truth texts.
            predictions (List[str]): The predicted texts.

        Returns:
            Dict[str, float]: A dictionary of computed CER metrics with metric names as
                keys and their values.
        """
        if len(targets) != len(predictions):
            raise ValueError("The number of targets and predictions must be the same.")

        total_cer = 0.0
        count = len(targets)

        for target, prediction in zip(targets, predictions):
            total_cer += cer(target, prediction)

        average_cer = total_cer / count if count > 0 else 0.0
        return {"cer": average_cer}

Functions

compute(targets, predictions)

Computes the CER metric based on the targets and predictions.

Parameters:

Name         Type       Description              Default
targets      List[str]  The ground truth texts.  required
predictions  List[str]  The predicted texts.     required

Returns:

Type              Description
dict[str, float]  A dictionary of computed CER metrics with metric names as keys and their values.

Source code in maestro/trainer/common/utils/metrics.py
def compute(self, targets: list[str], predictions: list[str]) -> dict[str, float]:
    """Computes the CER metric based on the targets and predictions.

    Args:
        targets (List[str]): The ground truth texts.
        predictions (List[str]): The predicted texts.

    Returns:
        Dict[str, float]: A dictionary of computed CER metrics with metric names as
            keys and their values.
    """
    if len(targets) != len(predictions):
        raise ValueError("The number of targets and predictions must be the same.")

    total_cer = 0.0
    count = len(targets)

    for target, prediction in zip(targets, predictions):
        total_cer += cer(target, prediction)

    average_cer = total_cer / count if count > 0 else 0.0
    return {"cer": average_cer}

describe()

Returns a list of metric names that this class will compute.

Returns:

Type       Description
list[str]  A list of metric names.

Source code in maestro/trainer/common/utils/metrics.py
def describe(self) -> list[str]:
    """Returns a list of metric names that this class will compute.

    Returns:
        List[str]: A list of metric names.
    """
    return ["cer"]