
XGBoost Configure "max_delta_step" Parameter for Imbalanced Classification

When working with imbalanced classification tasks in XGBoost, where the number of instances in each class differs significantly, the model can become biased toward the majority class and perform poorly on the minority class.

The max_delta_step parameter can help mitigate this issue by capping the maximum output value each tree leaf is allowed to take. It defaults to 0 (no constraint); setting it to a positive value (typically 1 to 10) makes each boosting update more conservative, which is particularly helpful with logistic loss when classes are extremely imbalanced.

This example demonstrates how to use the max_delta_step parameter and evaluates its impact on model performance using a synthetic imbalanced dataset.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import confusion_matrix, classification_report

# Generate an imbalanced synthetic dataset
X, y = make_classification(n_samples=1000, n_classes=2, weights=[0.9, 0.1], random_state=42)

# Split data into train and test sets, stratifying to preserve the class ratio
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

# Initialize XGBClassifier with default settings
model_default = XGBClassifier(n_estimators=100, random_state=42)

# Initialize XGBClassifier with max_delta_step set to a non-default value
model_mds = XGBClassifier(n_estimators=100, max_delta_step=1, random_state=42)

# Fit the models
model_default.fit(X_train, y_train)
model_mds.fit(X_train, y_train)

# Generate predictions
pred_default = model_default.predict(X_test)
pred_mds = model_mds.predict(X_test)

# Evaluate the models
print("Model with default settings:")
print("Confusion Matrix:")
print(confusion_matrix(y_test, pred_default))
print("\nClassification Report:")
print(classification_report(y_test, pred_default))

print("\nModel with max_delta_step set to 1:")
print("Confusion Matrix:")
print(confusion_matrix(y_test, pred_mds))
print("\nClassification Report:")
print(classification_report(y_test, pred_mds))

In this example, we generate a synthetic imbalanced dataset using make_classification from scikit-learn, with roughly 90% of the instances belonging to class 0 and 10% to class 1. We then initialize two XGBClassifier models: one with default settings (max_delta_step=0, meaning no constraint) and another with max_delta_step set to 1.

After training both models on the same data, we generate predictions and evaluate their performance using the confusion matrix and classification report.

By comparing the results of the two models, we can assess the impact of the max_delta_step parameter on handling class imbalance. A well-tuned max_delta_step can keep the majority class from dominating the updates and improve performance on the minority class.
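To distill the comparison down to a single number, you can also look at recall on the minority class directly. The following is a minimal sketch that reuses y_test, pred_default, and pred_mds from the example above (class 1 is the minority class in this dataset):

from sklearn.metrics import recall_score

# Recall on the minority class (class 1) for each model
recall_default = recall_score(y_test, pred_default, pos_label=1)
recall_mds = recall_score(y_test, pred_mds, pos_label=1)

print(f"Minority-class recall (default): {recall_default:.3f}")
print(f"Minority-class recall (max_delta_step=1): {recall_mds:.3f}")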

The optimal value for max_delta_step depends on the specific dataset and problem at hand; the value of 1 used in this example was chosen for demonstration purposes only. In practice, it's recommended to tune this parameter using techniques like cross-validation to find the best value for your specific use case, as sketched below.
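One way to do that is a small grid search over candidate values. The following is a minimal sketch, reusing X_train and y_train from the example above; the candidate grid and the choice of F1 as the scoring metric are illustrative assumptions, not prescriptions:

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Candidate values; 0 is the default and means "no constraint"
param_grid = {'max_delta_step': [0, 1, 2, 5, 10]}

# F1 emphasizes performance on the positive (minority) class
search = GridSearchCV(
    XGBClassifier(n_estimators=100, random_state=42),
    param_grid,
    scoring='f1',
    cv=5,
)
search.fit(X_train, y_train)

print(f"Best max_delta_step: {search.best_params_['max_delta_step']}")
print(f"Best cross-validated F1 score: {search.best_score_:.3f}")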

By understanding and leveraging the max_delta_step parameter, data scientists and machine learning engineers can effectively tackle imbalanced classification tasks with XGBoost and build more robust models that perform well on both majority and minority classes.


