Choosing the right evaluation metric is key to properly assessing machine learning models. For classification, true positive rate, precision, recall and F1 score offer insight into performance.

## Question

Which metric can you use to evaluate a classification model?

A. true positive rate

B. mean absolute error (MAE)

C. coefficient of determination (R2)

D. root mean squared error (RMSE)

## Answer

A. true positive rate

## Explanation

What does a good model look like?

An ROC curve that approaches the top-left corner, with a 100% true positive rate and a 0% false positive rate, indicates the best model. A random classifier traces the diagonal y = x line from the bottom-left to the top-right corner; a model worse than random dips below that diagonal.

The correct answer is A. You can use the true positive rate (TPR) to evaluate a classification model.

A classification model is a type of machine learning model that predicts a categorical label for a given input. For example, a classification model can predict whether an email is spam or not, or whether a tumor is benign or malignant. To evaluate the performance of a classification model, we need to compare the predicted labels with the actual labels and measure how well they match.

One way to do this is to use a confusion matrix, which is a table that shows the number of correct and incorrect predictions for each class. A confusion matrix for a binary classification problem (where there are only two possible classes) looks like this:

| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | True Positive (TP) | False Negative (FN) |
| Actual Negative | False Positive (FP) | True Negative (TN) |

The confusion matrix shows four types of outcomes:

- True Positive (TP): The model correctly predicts the positive class.
- True Negative (TN): The model correctly predicts the negative class.
- False Positive (FP): The model incorrectly predicts the positive class when the actual class is negative.
- False Negative (FN): The model incorrectly predicts the negative class when the actual class is positive.
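As a concrete sketch, the four counts can be tallied directly from predicted and actual labels. The label lists below are made up for illustration (1 = positive, 0 = negative):

```python
# Hypothetical labels for a binary classifier (1 = positive, 0 = negative).
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

# Tally each cell of the confusion matrix by comparing actual vs. predicted.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(f"TP={tp} FN={fn} FP={fp} TN={tn}")  # TP=3 FN=1 FP=1 TN=3
```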

Based on the confusion matrix, we can calculate various metrics to evaluate the classification model. One of these metrics is the true positive rate (TPR), which is also known as sensitivity or recall. The TPR measures the proportion of actual positives that are correctly predicted by the model, and is calculated as TPR = TP / (TP + FN).

The TPR ranges from 0 to 1, where a higher value indicates a better performance. A TPR of 1 means that the model correctly predicts all the positive cases, while a TPR of 0 means that the model misses all the positive cases. The TPR is useful for evaluating how well the model can identify the positive class, especially when the positive class is rare or important (such as detecting cancer or fraud).
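A minimal implementation of this calculation might look like the following (the label lists are toy data for illustration):

```python
def true_positive_rate(y_true, y_pred):
    """TPR (sensitivity/recall) = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Three actual positives; the model catches two of them.
tpr = true_positive_rate([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
print(round(tpr, 3))  # 0.667
```

Note that the false positive in the example does not affect the TPR at all; the metric only looks at how the actual positives were handled.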

The other options are not suitable for evaluating a classification model, because they are metrics for regression models. A regression model is a type of machine learning model that predicts a continuous value for a given input. For example, a regression model can predict the price of a house, or the temperature of a city. To evaluate the performance of a regression model, we need to compare the predicted values with the actual values and measure how close they are.

One way to do this is to use the mean absolute error (MAE), which is the average of the absolute differences between the predicted and actual values. The MAE ranges from 0 to infinity, where a lower value indicates better performance. An MAE of 0 means that the model perfectly predicts the actual values, while a high MAE means that the model has large errors. The MAE is useful for evaluating how far off the model's predictions are on average, regardless of the direction of the error.
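The MAE definition above translates directly into code; the values here are invented for illustration:

```python
def mean_absolute_error(y_true, y_pred):
    """Average of the absolute differences between actual and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Absolute errors are 0.5, 0.0, and 2.0, so the MAE is 2.5 / 3.
mae = mean_absolute_error([3.0, 5.0, 2.0], [2.5, 5.0, 4.0])
print(round(mae, 3))  # 0.833
```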

Another way to evaluate a regression model is to use the coefficient of determination (R2), which is also known as R-squared or the explained variance. R2 measures the proportion of the variance in the target variable that is explained by the model. R2 ranges from negative infinity to 1, where a higher value indicates better performance. An R2 of 1 means that the model perfectly explains the variance in the target variable, while an R2 of 0 means that the model is no better than predicting the mean. A negative R2 means that the model is worse than the mean. R2 is useful for evaluating how well the model fits the data, and how much variation in the target variable can be attributed to the model.
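R2 can be sketched as 1 minus the ratio of the residual sum of squares to the total sum of squares around the mean (toy values below):

```python
def r_squared(y_true, y_pred):
    """R2 = 1 - SS_res / SS_tot, where SS_tot is variance around the mean."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Predictions track the actuals closely, so R2 is near 1.
r2 = r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(round(r2, 2))  # 0.98
```

Predicting the mean for every input gives SS_res = SS_tot, hence R2 = 0, which is why 0 is the "no better than the mean" baseline.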

A third way to evaluate a regression model is to use the root mean squared error (RMSE), which is the square root of the average of the squared differences between the predicted and actual values. The RMSE ranges from 0 to infinity, where a lower value indicates better performance. An RMSE of 0 means that the model perfectly predicts the actual values, while a high RMSE means that the model has large errors. The RMSE is useful for evaluating the typical size of the model's prediction error, expressed in the same units as the target variable. Because the errors are squared before averaging, the RMSE is also sensitive to outliers, extreme values that deviate sharply from the rest of the data.
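The RMSE calculation, using the same toy values as the MAE example, shows the outlier sensitivity: the single large error (2.0) dominates the result.

```python
import math

def root_mean_squared_error(y_true, y_pred):
    """Square root of the average squared difference between actual and predicted."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse)

# Squared errors are 0.25, 0.0, and 4.0; RMSE = sqrt(4.25 / 3) ≈ 1.19,
# noticeably larger than the MAE of ≈ 0.83 on the same data.
rmse = root_mean_squared_error([3.0, 5.0, 2.0], [2.5, 5.0, 4.0])
print(round(rmse, 2))  # 1.19
```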

## References

Microsoft Docs > Azure > Machine Learning > Evaluate automated machine learning experiment results
