Overfitting and underfitting

Overfitting and underfitting are common issues in machine learning that arise during the training of a model. They represent two extremes in the model’s ability to generalize to new, unseen data.

  1. Overfitting:
    • Definition: Overfitting occurs when a model learns the training data too well, capturing noise and random fluctuations in the data rather than the underlying patterns. As a result, the model performs poorly on new, unseen data.
    • Characteristics:
      • High training accuracy but low test accuracy.
      • The model fits the training data too closely, capturing noise and outliers.
      • Complex models with too many parameters are prone to overfitting.
    • Causes:
      • Using a model that is too complex for the amount of available training data.
      • Lack of regularization.
      • Training for too many epochs, leading to memorization of training examples.
    • Prevention and Mitigation:
      • Use simpler models with fewer parameters.
      • Introduce regularization techniques (e.g., L1, L2 regularization).
      • Increase the amount of training data.
      • Use dropout (in neural networks) or early stopping to prevent memorization.
  2. Underfitting:
    • Definition: Underfitting occurs when a model is too simple and fails to capture the underlying patterns in the training data. As a result, the model performs poorly not only on the training data but also on new, unseen data.
    • Characteristics:
      • Low training accuracy and low test accuracy.
      • The model is too simplistic and cannot capture the complexity of the data.
      • May result from using a linear model for a non-linear problem.
    • Causes:
      • Using a model that is too simple for the complexity of the data.
      • Too little training (e.g., too few epochs or iterations) for the model to learn adequately.
    • Prevention and Mitigation:
      • Use more complex models or increase model capacity.
      • Train long enough for the model to learn the underlying patterns.
      • Add relevant features or perform feature engineering.
      • Consider using ensemble methods.
  3. Bias-Variance Tradeoff:
    • Overfitting and underfitting are manifestations of the bias-variance tradeoff. A model with high bias (underfitting) is too simplistic and lacks the capacity to capture the underlying patterns. A model with high variance (overfitting) fits the training data too closely, capturing noise and variations that are not representative of the true patterns.
    • Achieving an optimal balance between bias and variance is crucial for building models that generalize well to new, unseen data; the first sketch after this list illustrates both failure modes by varying model complexity on the same dataset.
  4. Validation and Test Sets:
    • Splitting the dataset into training, validation, and test sets is essential for detecting and addressing overfitting and underfitting. The training set is used for model training, the validation set helps tune hyperparameters and prevent overfitting, and the test set assesses the model’s performance on unseen data. The second sketch after this list shows such a split being used to choose a regularization strength.
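
To make the first three points concrete, here is a minimal sketch, assuming NumPy and scikit-learn are available, that fits polynomial models of increasing degree to the same noisy data and compares training and test error. The sample size, noise level, and the degrees 1, 4, and 15 are illustrative choices, not prescriptions.

```python
# Minimal sketch: compare training vs. test error as model complexity grows.
# Assumes scikit-learn and NumPy; all numeric choices here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(80, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=80)

# Hold out part of the data to measure generalization.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (1, 4, 15):  # too simple, roughly right, very flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

# Typical pattern: degree 1 has high error on both sets (underfitting), while
# degree 15 has very low training error but higher test error (overfitting).
```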
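
A second sketch, under the same assumptions, illustrates the three-way split from point 4 together with L2 regularization from point 1: the training set fits Ridge models, the validation set selects the regularization strength (alpha), and the held-out test set provides the final performance estimate. The synthetic data and the candidate alpha values are again illustrative.

```python
# Minimal sketch of a train/validation/test workflow: the validation set picks
# the L2 (Ridge) regularization strength, and the test set is touched only once.
# Assumes scikit-learn and NumPy; data and candidate alphas are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 30))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=300)

# 60% train, 20% validation, 20% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=2)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=2)

best_alpha, best_val_mse = None, np.inf
for alpha in (0.01, 0.1, 1.0, 10.0, 100.0):
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    if val_mse < best_val_mse:
        best_alpha, best_val_mse = alpha, val_mse

# Refit with the chosen alpha and report performance on truly unseen data.
final_model = Ridge(alpha=best_alpha).fit(X_train, y_train)
test_mse = mean_squared_error(y_test, final_model.predict(X_test))
print(f"chosen alpha={best_alpha}  test MSE={test_mse:.3f}")
```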

Regularly monitoring training and validation/test performance during model training is key to detecting signs of overfitting or underfitting. Adjustments to model complexity, regularization, and training duration can help strike a balance that ensures good generalization to new data; the sketch below shows one simple way to track both error curves and stop training once validation error stops improving.
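
One simple way to do this monitoring is sketched below, again assuming scikit-learn: training and validation error are computed after each epoch, and training stops once validation error has not improved for a few consecutive epochs. The model, the patience value, and the synthetic data are illustrative choices, not a definitive recipe.

```python
# Minimal early-stopping sketch: track training and validation error each
# epoch and stop once validation error has stalled. Assumes scikit-learn and
# NumPy; the model, patience value, and synthetic data are illustrative.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.5, size=500)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=1)

model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=1)
best_val_mse, bad_epochs, patience = np.inf, 0, 5

for epoch in range(200):
    model.partial_fit(X_train, y_train)  # one pass over the training data
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    print(f"epoch {epoch:3d}  train MSE={train_mse:.3f}  val MSE={val_mse:.3f}")
    if val_mse < best_val_mse - 1e-4:    # meaningful improvement on validation data
        best_val_mse, bad_epochs = val_mse, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:           # validation error has stalled: stop training
        print(f"stopping early at epoch {epoch}; best val MSE={best_val_mse:.3f}")
        break
```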
