AI-900: Leveraging Random Splitting for Effective Model Training and Evaluation

Discover why rows should be randomly split into separate subsets during model training, and how this technique supports accurate model performance and reliable evaluation by using data that was not part of the training process.

Question

When training a model, why should you randomly split the rows into separate subsets?

A. to train the model twice to attain better accuracy
B. to train multiple models simultaneously to attain better performance
C. to test the model by using data that was not used to train the model

Answer

C. to test the model by using data that was not used to train the model

Explanation

The correct answer is C. to test the model by using data that was not used to train the model.

When training a model, you should randomly split the rows into separate subsets, such as a training set, a validation set, and a test set. The training set is used to fit the model parameters, the validation set is used to tune the model hyperparameters, and the test set is used to evaluate the model's performance. By testing with data that was not used to train the model, you can measure the model's ability to generalize to new and unseen data and detect overfitting or underfitting.
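As a minimal sketch of such a split (assuming a scikit-learn workflow; the Iris dataset, the split ratios, and the random_state value are illustrative choices, not part of the exam question):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load an example dataset (features X and labels y).
X, y = load_iris(return_X_y=True)

# First randomly hold out a test set (20% of the rows).
X_temp, X_test, y_temp, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Then split the remaining rows into training and validation sets
# (here 75% / 25% of the remainder).
X_train, X_val, y_train, y_val = train_test_split(
    X_temp, y_temp, test_size=0.25, random_state=42
)

print(len(X_train), len(X_val), len(X_test))  # e.g. 90 30 30

Because the rows are shuffled before splitting, each subset is a representative random sample rather than, say, the first or last block of the file.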

The other options are not correct for the following reasons:

  • to train the model twice to attain better accuracy: This is not a valid reason to split the data into separate subsets. Training the model twice on the same data does not improve accuracy; it may instead lead to overfitting. Overfitting occurs when the model learns the noise and errors in the training data and fails to generalize to new data. Underfitting occurs when the model fails to learn the patterns and relationships in the training data and performs poorly on both the training data and new data.
  • to train multiple models simultaneously to attain better performance: This is not a valid reason to split the data into separate subsets. Training multiple models simultaneously on the same data does not improve performance, but it does increase computational cost and complexity. Performance is measured by how well a model performs on new and unseen data, not by how many models are trained.

The goal is to produce a trained (fitted) model that generalizes well to new, unknown data. The fitted model is evaluated using “new” examples from the held-out datasets (validation and test datasets) to estimate the model’s accuracy in classifying new data.
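For example (a hypothetical continuation of the split sketch above; LogisticRegression and accuracy_score are illustrative choices), the fitted model is scored only on rows it never saw during training:

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Fit the model parameters on the training rows only.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Estimate generalization by scoring held-out rows.
val_accuracy = accuracy_score(y_val, model.predict(X_val))     # used to tune hyperparameters
test_accuracy = accuracy_score(y_test, model.predict(X_test))  # final performance estimate
print(val_accuracy, test_accuracy)

If the training accuracy is high but the held-out accuracy is low, that gap is the signal of overfitting that the random split makes visible.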

Reference

Wikipedia > Training, validation, and test data sets

This Microsoft Azure AI Fundamentals AI-900 certification exam practice question and answer (Q&A), with detailed explanation and reference, is available for free and is helpful for passing the AI-900 exam and earning the Microsoft Azure AI Fundamentals certification.