zae_engine.metrics package¶
Submodules¶
zae_engine.metrics.bijective module¶
- class zae_engine.metrics.bijective.BijectiveMetrix(prediction: ndarray | Tensor, label: ndarray | Tensor, num_classes: int, th_run_length: int = 2)[source]¶
Bases:
object
Compute the bijective confusion matrix of the given sequences.
The surjective operation projects every run in the prediction onto the label and checks the IoU; the injective operation projects in the opposite direction. A usage sketch follows this class entry.
- Parameters:
prediction (Union[np.ndarray, torch.Tensor]) – Sequence of predicted labels.
label (Union[np.ndarray, torch.Tensor]) – Sequence of true labels.
num_classes (int) – The number of classes.
th_run_length (int, optional) – The minimum length of runs to be considered. Default is 2.
- bijective_run_pair¶
The bijective pairs of runs between prediction and label.
- Type:
torch.Tensor
- injective_run_pair¶
The injective mapping from prediction to label.
- Type:
torch.Tensor
- surjective_run_pair¶
The surjective mapping from label to prediction.
- Type:
torch.Tensor
- injective_mat¶
The confusion matrix from injective mapping.
- Type:
np.ndarray
- surjective_mat¶
The confusion matrix from surjective mapping.
- Type:
np.ndarray
- bijective_mat¶
The confusion matrix from bijective mapping.
- Type:
np.ndarray
- bijective_f1¶
The F1 score from bijective mapping.
- Type:
float
- injective_f1¶
The F1 score from injective mapping.
- Type:
float
- surjective_f1¶
The F1 score from surjective mapping.
- Type:
float
- bijective_acc¶
The accuracy from bijective mapping.
- Type:
float
- injective_acc¶
The accuracy from injective mapping.
- Type:
float
- surjective_acc¶
The accuracy from surjective mapping.
- Type:
float
- bijective_confusion() ndarray [source]¶
Compute the confusion matrix using the bijective mapping.
- Returns:
The confusion matrix.
- Return type:
np.ndarray
- map_and_confusion(x_runs: List[Run], y_array: ndarray) ndarray [source]¶
Map runs onto the label array and compute the confusion matrix.
- Parameters:
x_runs (List[Run]) – List of runs to map.
y_array (np.ndarray) – Label array to map onto.
- Returns:
The confusion matrix.
- Return type:
np.ndarray
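The following is a minimal usage sketch, assuming both inputs are 1-D integer label sequences as described above; the resulting run pairs, matrices, and scores depend on the inputs and are not shown:
>>> import numpy as np
>>> from zae_engine.metrics.bijective import BijectiveMetrix
>>> prediction = np.array([0, 1, 1, 0, 2, 2, 0, 0])
>>> label = np.array([0, 1, 1, 1, 2, 2, 2, 0])
>>> metric = BijectiveMetrix(prediction, label, num_classes=3, th_run_length=2)
>>> conf = metric.bijective_confusion()  # (num_classes, num_classes) confusion matrix
>>> f1, acc = metric.bijective_f1, metric.bijective_acc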
zae_engine.metrics.confusion module¶
- zae_engine.metrics.confusion.confusion_matrix(y_hat: ndarray | Tensor, y_true: ndarray | Tensor, num_classes: int) Tensor [source]¶
Compute the confusion matrix for classification predictions.
This function calculates the confusion matrix, comparing the predicted labels (y_hat) with the true labels (y_true).
- Parameters:
y_hat (Union[np.ndarray, torch.Tensor]) – The predicted labels, either as a numpy array or a torch tensor.
y_true (Union[np.ndarray, torch.Tensor]) – The true labels, either as a numpy array or a torch tensor.
num_classes (int) – The number of classes in the classification task.
- Returns:
The confusion matrix as a 2-D tensor of shape (num_classes, num_classes).
- Return type:
torch.Tensor
Examples
>>> y_true = np.array([0, 1, 2, 2, 1])
>>> y_hat = np.array([0, 2, 2, 2, 0])
>>> confusion_matrix(y_hat, y_true, 3)
tensor([[1., 0., 0.],
        [1., 0., 0.],
        [0., 1., 2.]])
>>> y_true = torch.tensor([0, 1, 2, 2, 1])
>>> y_hat = torch.tensor([0, 2, 2, 2, 0])
>>> confusion_matrix(y_hat, y_true, 3)
tensor([[1., 0., 0.],
        [1., 0., 0.],
        [0., 1., 2.]])
- zae_engine.metrics.confusion.print_confusion_matrix(conf_mat: ndarray | Tensor, cell_width: int | None = 4, class_name: List[str] | Tuple[str] = None, frame: bool | None = True)[source]¶
Print the confusion matrix in a formatted table.
This function prints the given confusion matrix using the rich library for better visualization. The cell width and class names are customizable.
- Parameters:
conf_mat (Union[np.ndarray, torch.Tensor]) – The confusion matrix, either as a numpy array or a torch tensor.
cell_width (Optional[int], default=4) – The width of each cell in the printed table.
class_name (Union[List[str], Tuple[str]], optional) – The names of the classes. If provided, must match the number of classes in the confusion matrix.
frame (Optional[bool], default=True) – Whether to include a frame around the table.
- Return type:
None
Examples
>>> conf_mat = np.array([[5, 2], [1, 3]])
>>> print_confusion_matrix(conf_mat, class_name=['Class 0', 'Class 1'])
>>> conf_mat = torch.tensor([[5, 2], [1, 3]])
>>> print_confusion_matrix(conf_mat, class_name=['Class 0', 'Class 1'])
zae_engine.metrics.count module¶
- class zae_engine.metrics.count.Acc[source]¶
Bases:
object
Accuracy calculation class.
This class calculates either top-1 or top-k accuracy depending on the input shapes; a usage sketch follows this class entry.
- __call__(true, predict):
Computes the accuracy based on the input shapes.
- static accuracy(true: ndarray | Tensor, predict: ndarray | Tensor) float [source]¶
Compute the top-1 accuracy of predictions.
This function compares the true labels with the predicted labels and calculates the top-1 accuracy.
- Parameters:
true (Union[np.ndarray, torch.Tensor]) – The true labels, either as a numpy array or a torch tensor. Shape should be [-1, dim].
predict (Union[np.ndarray, torch.Tensor]) – The predicted labels, either as a numpy array or a torch tensor. Shape should be [-1, dim].
- Returns:
The top-1 accuracy of the predictions as a torch tensor.
- Return type:
torch.Tensor
- static top_k_accuracy(true: ndarray | Tensor, predict: ndarray | Tensor, k: int) float [source]¶
Compute the top-k accuracy of predictions.
This function compares the true labels with the top-k predicted labels and calculates the top-k accuracy.
- Parameters:
true (Union[np.ndarray, torch.Tensor]) – The true labels, either as a numpy array or a torch tensor.
predict (Union[np.ndarray, torch.Tensor]) – The predicted labels, either as a numpy array or a torch tensor.
k (int) – The number of top predictions to consider for calculating the accuracy.
- Returns:
The top-k accuracy of the predictions as a torch tensor.
- Return type:
torch.Tensor
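A minimal usage sketch, assuming that calling an Acc instance with two label tensors of matching shape yields top-1 accuracy and that top_k_accuracy expects per-class scores in predict (both are assumptions about the implementation); outputs are omitted:
>>> import torch
>>> from zae_engine.metrics.count import Acc
>>> metric = Acc()
>>> top1 = metric(torch.tensor([1, 2, 3, 4]), torch.tensor([1, 2, 2, 4]))  # (true, predict)
>>> scores = torch.tensor([[0.1, 0.7, 0.2], [0.5, 0.2, 0.3]])  # assumed per-class scores
>>> top2 = Acc.top_k_accuracy(torch.tensor([1, 2]), scores, k=2)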
- zae_engine.metrics.count.accuracy(true: ndarray | Tensor, predict: ndarray | Tensor)[source]¶
Compute the accuracy of predictions.
This function compares the true labels with the predicted labels and calculates the accuracy.
- Parameters:
true (Union[np.ndarray, torch.Tensor]) – The true labels, either as a numpy array or a torch tensor. Shape should be [-1, dim].
predict (Union[np.ndarray, torch.Tensor]) – The predicted labels, either as a numpy array or a torch tensor. Shape should be [-1, dim].
- Returns:
The accuracy of the predictions as a torch tensor.
- Return type:
torch.Tensor
Examples
>>> true = np.array([1, 2, 3, 4])
>>> predict = np.array([1, 2, 2, 4])
>>> accuracy(true, predict)
tensor(0.7500)
>>> true = torch.tensor([1, 2, 3, 4])
>>> predict = torch.tensor([1, 2, 2, 4])
>>> accuracy(true, predict)
tensor(0.7500)
- zae_engine.metrics.count.f_beta(pred: ndarray | Tensor, true: ndarray | Tensor, beta: float, num_classes: int, average: str = 'micro')[source]¶
Compute the F-beta score.
This function calculates the F-beta score for the given predictions and true labels, supporting both micro and macro averaging.
- Parameters:
pred (Union[np.ndarray, torch.Tensor]) – The predicted labels, either as a numpy array or a torch tensor.
true (Union[np.ndarray, torch.Tensor]) – The true labels, either as a numpy array or a torch tensor.
beta (float) – The beta value for the F-beta score calculation.
num_classes (int) – The number of classes in the classification task.
average (str) – The averaging method for the F-beta score calculation. Either ‘micro’ or ‘macro’.
- Returns:
The F-beta score as a torch tensor.
- Return type:
torch.Tensor
Examples
>>> pred = np.array([1, 2, 3, 4])
>>> true = np.array([1, 2, 2, 4])
>>> f_beta(pred, true, beta=1.0, num_classes=5, average='micro')
tensor(0.8000)
>>> pred = torch.tensor([1, 2, 3, 4])
>>> true = torch.tensor([1, 2, 2, 4])
>>> f_beta(pred, true, beta=1.0, num_classes=5, average='micro')
tensor(0.8000)
- zae_engine.metrics.count.f_beta_from_mat(conf_mat: ndarray | Tensor, beta: float, num_classes: int, average: str = 'micro')[source]¶
Compute the F-beta score from a given confusion matrix.
This function calculates the F-beta score using the provided confusion matrix, supporting both micro and macro averaging.
- Parameters:
conf_mat (Union[np.ndarray, torch.Tensor]) – The confusion matrix, either as a numpy array or a torch tensor.
beta (float) – The beta value for the F-beta score calculation.
num_classes (int) – The number of classes in the classification task.
average (str) – The averaging method for the F-beta score calculation. Either ‘micro’ or ‘macro’.
- Returns:
The F-beta score as a torch tensor.
- Return type:
torch.Tensor
Examples
>>> conf_mat = np.array([[5, 2], [1, 3]])
>>> f_beta_from_mat(conf_mat, beta=1.0, num_classes=2, average='micro')
tensor(0.7273)
>>> conf_mat = torch.tensor([[5, 2], [1, 3]])
>>> f_beta_from_mat(conf_mat, beta=1.0, num_classes=2, average='micro')
tensor(0.7273)
zae_engine.metrics.iou module¶
- zae_engine.metrics.iou.giou(img1: ndarray | Tensor, img2: ndarray | Tensor, iou: bool = False) Tuple[Tensor, Tensor] | Tensor [source]¶
Compute the Generalized Intersection over Union (GIoU) and optionally the Intersection over Union (IoU) for given images.
This function calculates the GIoU and optionally the IoU for bounding boxes in object detection tasks.
- Parameters:
img1 (Union[np.ndarray, torch.Tensor]) – The first input image, either as a numpy array or a torch tensor. Shape should be [-1, 2], representing on-off pairs.
img2 (Union[np.ndarray, torch.Tensor]) – The second input image, either as a numpy array or a torch tensor. Shape should be [-1, 2], representing on-off pairs.
iou (bool, optional) – If True, return both IoU and GIoU. Default is False.
- Returns:
The GIoU values, and optionally the IoU values, as torch tensors. Shape is [-1].
- Return type:
Union[Tuple[torch.Tensor, torch.Tensor], torch.Tensor]
Examples
>>> img1 = np.array([[1, 4], [2, 5]])
>>> img2 = np.array([[1, 3], [2, 6]])
>>> giou(img1, img2)
tensor([-0.6667, -0.5000])
>>> giou(img1, img2, iou=True)
(tensor([-0.6667, -0.5000]), tensor([0.5000, 0.3333]))
>>> img1 = torch.tensor([[1, 4], [2, 5]])
>>> img2 = torch.tensor([[1, 3], [2, 6]])
>>> giou(img1, img2)
tensor([-0.6667, -0.5000])
>>> giou(img1, img2, iou=True)
(tensor([-0.6667, -0.5000]), tensor([0.5000, 0.3333]))
- zae_engine.metrics.iou.miou(img1: ndarray | Tensor, img2: ndarray | Tensor) Tensor [source]¶
Compute the mean Intersection over Union (mIoU) for each value in the given images.
This function calculates the mIoU for 1-dimensional arrays or tensors. Future updates will extend support for higher-dimensional arrays or tensors.
- Parameters:
img1 (Union[np.ndarray, torch.Tensor]) – The first input image, either as a numpy array or a torch tensor. Shape should be [-1, dim].
img2 (Union[np.ndarray, torch.Tensor]) – The second input image, either as a numpy array or a torch tensor. Shape should be [-1, dim].
- Returns:
The mIoU values for each class, as a torch tensor. Shape is [-1].
- Return type:
torch.Tensor
Examples
>>> img1 = np.array([0, 1, 1, 2, 2])
>>> img2 = np.array([0, 1, 1, 2, 1])
>>> miou(img1, img2)
tensor([1.0000, 0.6667, 0.0000])
>>> img1 = torch.tensor([0, 1, 1, 2, 2])
>>> img2 = torch.tensor([0, 1, 1, 2, 1])
>>> miou(img1, img2)
tensor([1.0000, 0.6667, 0.0000])
zae_engine.metrics.signals module¶
- zae_engine.metrics.signals.mse(signal1: ndarray | Tensor, signal2: ndarray | Tensor) float [source]¶
Compute the mean squared error (MSE) between two signals.
This function calculates the MSE between two input signals, which is a measure of the average squared difference between the signals.
- Parameters:
signal1 (Union[np.ndarray, torch.Tensor]) – The first input signal, either as a numpy array or a torch tensor.
signal2 (Union[np.ndarray, torch.Tensor]) – The second input signal, either as a numpy array or a torch tensor.
- Returns:
The MSE value between the two signals.
- Return type:
float
Examples
>>> signal1 = np.array([1, 2, 3, 4, 5])
>>> signal2 = np.array([1, 2, 3, 4, 6])
>>> mse(signal1, signal2)
0.20000000298023224
>>> signal1 = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float)
>>> signal2 = torch.tensor([1, 2, 3, 4, 6], dtype=torch.float)
>>> mse(signal1, signal2)
0.20000000298023224
- zae_engine.metrics.signals.peak_signal_to_noise(signal: Tensor | ndarray, noise: Tensor | ndarray, peak: bool | int | float = False) float [source]¶
Compute the peak signal-to-noise ratio (PSNR).
This function calculates the PSNR, which is a measure of the ratio of the peak signal power to the power of background noise. If peak is not given, the maximum value in the signal is used as the peak value.
- Parameters:
signal (Union[torch.Tensor, np.ndarray]) – The input signal, either as a torch tensor or a numpy array.
noise (Union[torch.Tensor, np.ndarray]) – The noise signal, either as a torch tensor or a numpy array.
peak (Union[bool, int, float], optional) – The peak value to be used in the calculation. If not provided, the maximum value in the signal is used.
- Returns:
The PSNR value in decibels (dB).
- Return type:
float
Examples
>>> signal = np.array([1, 2, 3, 4, 5])
>>> noise = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
>>> peak_signal_to_noise(signal, noise)
24.0824
>>> signal = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float)
>>> noise = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5], dtype=torch.float)
>>> peak_signal_to_noise(signal, noise)
24.0824
>>> peak_signal_to_noise(signal, noise, peak=5)
24.0824
- zae_engine.metrics.signals.qilv(signal1: Tensor | ndarray, signal2: Tensor | ndarray, window: Tensor | ndarray) float [source]¶
Calculate the Quality Index based on Local Variance (QILV) for two signals.
This function computes a global compatibility metric between two signals based on their local variance distribution.
- Parameters:
signal1 (Union[torch.Tensor, np.ndarray]) – The first input signal.
signal2 (Union[torch.Tensor, np.ndarray]) – The second input signal.
window (Union[torch.Tensor, np.ndarray]) – The window used to define the local region for variance calculation.
- Returns:
The QILV value, bounded to the range [0, 1].
- Return type:
float
Examples
>>> signal1 = np.array([1, 2, 3, 4, 5])
>>> signal2 = np.array([1, 2, 3, 4, 6])
>>> window = np.ones(3)
>>> qilv(signal1, signal2, window)
0.9948761003700519
- zae_engine.metrics.signals.rms(signal: ndarray | Tensor) float [source]¶
Compute the root mean square (RMS) of a signal.
This function calculates the RMS value of the input signal, which is a measure of the magnitude of the signal.
- Parameters:
signal (Union[np.ndarray, torch.Tensor]) – The input signal, either as a numpy array or a torch tensor.
- Returns:
The RMS value of the signal.
- Return type:
float
Examples
>>> signal = np.array([1, 2, 3, 4, 5])
>>> rms(signal)
3.3166247903554
>>> signal = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float)
>>> rms(signal)
3.3166247901916504
- zae_engine.metrics.signals.signal_to_noise(signal: Tensor | ndarray, noise: Tensor | ndarray) float [source]¶
Compute the signal-to-noise ratio (SNR).
This function calculates the SNR, which is a measure of the ratio of the power of a signal to the power of background noise.
- Parameters:
signal (Union[torch.Tensor, np.ndarray]) – The input signal, either as a torch tensor or a numpy array.
noise (Union[torch.Tensor, np.ndarray]) – The noise signal, either as a torch tensor or a numpy array.
- Returns:
The SNR value in decibels (dB).
- Return type:
float
Examples
>>> signal = np.array([1, 2, 3, 4, 5])
>>> noise = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
>>> signal_to_noise(signal, noise)
20.0
>>> signal = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float)
>>> noise = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5], dtype=torch.float)
>>> signal_to_noise(signal, noise)
20.0