Learn about two common metrics for evaluating a regression model: the coefficient of determination (R2) and root mean squared error (RMSE). Find out how they are calculated and what they mean.
Question
What are two metrics that you can use to evaluate a regression model? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. coefficient of determination (R2)
B. F1 score
C. root mean squared error (RMSE)
D. area under curve (AUC)
E. balanced accuracy
Answer
A. coefficient of determination (R2)
C. root mean squared error (RMSE)
Explanation
A: R-squared (R2), also called the coefficient of determination, represents the predictive power of the model as a value between -inf and 1.00. A value of 1.00 means a perfect fit; because the fit can be arbitrarily poor, the score can also be negative.
C: RMS-loss, or root mean squared error (RMSE) (also called root mean square deviation, RMSD), measures the difference between the values predicted by a model and the values observed from the environment that is being modeled.
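For reference, the standard definitions of these two metrics, with y_i the observed values, ŷ_i the predicted values, ȳ the mean of the observed values, and n the number of samples, are:

```latex
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
```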
Incorrect Answers:
B: The F1 score, also known as the balanced F-score or F-measure, is used to evaluate a classification model.
D: The area under the ROC curve (AUC, also written AUC-ROC) is used to evaluate a classification model.
E: Balanced accuracy is likewise used to evaluate a classification model.
The correct answers are A and C.
A regression model is a type of predictive model that estimates a continuous output variable from one or more input variables. To evaluate how well a regression model fits the data and generalizes to new data, we need to use error metrics that measure the difference between the actual and predicted values.
Two common error metrics for regression models are:
- Coefficient of determination (R2): This metric measures how much of the variation in the output variable is explained by the input variables. Higher values indicate a better fit, with 1 meaning a perfect fit; the score can be negative when the model fits the data worse than simply predicting the mean. R2 is calculated as the ratio of the explained variance to the total variance.
- Root mean squared error (RMSE): This metric measures the average magnitude of the error between the actual and predicted values. It is calculated as the square root of the mean squared error (MSE), which is the average of the squared differences between the actual and predicted values. Because RMSE is expressed in the same units as the output variable, it is easy to interpret, but it is not directly comparable across output variables with different scales. A short code sketch of both metrics follows this list.
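The following is a minimal Python sketch, assuming scikit-learn and NumPy are installed, of how these two metrics could be computed for a simple regression model fitted to toy data; the data, model, and variable names are illustrative only and not part of the exam question:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

# Toy data: a noisy linear relationship (illustrative values only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X.ravel() + 2.0 + rng.normal(0, 1.5, size=100)

# Fit a simple linear regression model and predict on the same data.
model = LinearRegression().fit(X, y)
y_pred = model.predict(X)

# R2: proportion of variance in y explained by the model (1.0 = perfect fit).
r2 = r2_score(y, y_pred)

# RMSE: square root of the mean squared error, in the same units as y.
rmse = np.sqrt(mean_squared_error(y, y_pred))

print(f"R2:   {r2:.3f}")
print(f"RMSE: {rmse:.3f}")
```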
The other options are not suitable for evaluating regression models:
- F1 score: This metric is used for evaluating classification models, not regression models. It is the harmonic mean of precision and recall, which are measures of how well the model can identify the correct class labels.
- Area under curve (AUC): This metric is also used for evaluating classification models, not regression models. It is the area under the receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate for different threshold values. AUC measures how well the model can distinguish between positive and negative classes.
- Balanced accuracy: This is another metric used for evaluating classification models, not regression models. It is the average of the recall scores for each class, which accounts for class imbalance. It measures how well the model correctly classifies each class. A short sketch of these classification metrics follows this list.
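For contrast, the classification metrics above operate on predicted class labels or scores rather than on continuous values. Below is a minimal sketch, assuming scikit-learn is installed and using small illustrative label arrays:

```python
from sklearn.metrics import f1_score, roc_auc_score, balanced_accuracy_score

# Illustrative binary-classification results (made-up values).
y_true = [0, 0, 1, 1, 1, 0, 1, 0]                     # actual class labels
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]                     # predicted class labels
y_scores = [0.2, 0.6, 0.8, 0.9, 0.4, 0.1, 0.7, 0.3]   # predicted probabilities

# F1 score: harmonic mean of precision and recall.
print("F1 score:", f1_score(y_true, y_pred))

# AUC: area under the ROC curve, computed from predicted scores.
print("AUC:", roc_auc_score(y_true, y_scores))

# Balanced accuracy: average of the recall obtained on each class.
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
```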