
Configure XGBoost Dropout Regularization (Dart)

Dropout regularization is a technique to prevent overfitting in XGBoost models by randomly dropping a fraction of the trees during each boosting iteration.

By introducing this randomness, dropout effectively creates an ensemble of submodels and reduces the co-adaptation of trees.

Configuring dropout in XGBoost involves setting the booster parameter to 'dart' and the rate_drop parameter to a non-zero value.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Create a synthetic dataset
X, y = make_classification(n_samples=1000, n_classes=2, n_features=20, random_state=42)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the XGBoost classifier with dropout regularization
xgb_model = XGBClassifier(objective='binary:logistic', booster='dart', rate_drop=0.1, n_estimators=100)

# Train the model
xgb_model.fit(X_train, y_train)

# Predict on the test set
y_pred = xgb_model.predict(X_test)
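
To gauge how well the dropout-regularized model generalizes, you can score the held-out predictions. The snippet below is a minimal follow-on to the example above, using scikit-learn's accuracy_score on the y_test and y_pred arrays already defined:

from sklearn.metrics import accuracy_score

# Evaluate the DART model on the held-out test set
accuracy = accuracy_score(y_test, y_pred)
print(f"Test accuracy: {accuracy:.3f}")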

DART booster

The DART (Dropouts meet Multiple Additive Regression Trees) booster is one of the boosting algorithms available in XGBoost.

It incorporates the concept of dropout, commonly used in deep learning, to address the issue of overfitting in boosted tree models.

Here’s how the DART booster works (a simplified code sketch tying these steps together follows the list):

1. Dropout Technique:

Dropout in the context of DART means randomly dropping a fraction of the existing trees at each boosting iteration. Unlike traditional gradient boosting, where each new tree is fit to the cumulative predictions of all previously built trees, DART leaves the dropped subset of trees out of the predictions and gradients used to train the current tree.

2. Tree Training:

In each boosting round, after selecting which trees to drop, the residuals (errors) are recalculated as if the dropped trees were not part of the model. A new tree is then trained to fit these adjusted residuals.

3. Tree Dropout:

The fraction of trees dropped in each round is a hyperparameter (rate_drop in XGBoost) and can be adjusted to the needs of the model. After the new tree has been trained, the dropped trees are returned to the ensemble, so the dropping is only temporary and applies solely while that individual tree is being trained.

4. Weight Shrinkage:

To stabilize the learning process, DART employs another mechanism called weight shrinkage. After a new tree is trained and before it is added to the ensemble (especially when trees were dropped), its output is scaled by a factor (usually less than 1). This is akin to the learning rate in other forms of gradient boosting.

5. Normalization:

The new tree and the temporarily dropped trees are rescaled so that, at prediction time, the output of the full ensemble stays on the right scale. This ensures that the prediction is a proper ensemble of all individual trees, with each one contributing despite the dropout applied during training.
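
The sketch below ties these five steps together in plain Python. It is a deliberately simplified, educational re-implementation rather than XGBoost's actual DART code: shallow scikit-learn regression trees stand in for the boosted trees, squared error keeps the residual computation simple, and the shrinkage and normalization constants (learning_rate and the k / (k + 1) rescaling) are illustrative choices, not the exact formulas XGBoost uses.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Toy regression data (squared error keeps the residual math simple)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

n_rounds = 50
rate_drop = 0.1          # chance of dropping each existing tree; plays the role of XGBoost's rate_drop
learning_rate = 0.3      # shrinkage applied to every new tree
trees, weights = [], []  # the ensemble and the per-tree weights

for _ in range(n_rounds):
    # Step 1 (dropout): randomly pick a subset of the existing trees to drop this round
    drop_mask = rng.random(len(trees)) < rate_drop
    kept = [i for i in range(len(trees)) if not drop_mask[i]]
    dropped = [i for i in range(len(trees)) if drop_mask[i]]

    # Step 2 (tree training): residuals are computed as if the dropped trees were absent
    if kept:
        partial_pred = np.sum([weights[i] * trees[i].predict(X) for i in kept], axis=0)
    else:
        partial_pred = np.zeros_like(y)
    residuals = y - partial_pred
    new_tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)

    # Steps 3 and 4 (tree dropout and weight shrinkage): the dropped trees come back,
    # and the new tree's output is shrunk before it joins the ensemble
    k = len(dropped)
    new_weight = learning_rate / (k + 1)

    # Step 5 (normalization): rescale the dropped trees so the combined contribution of
    # the dropped trees plus the new tree stays close to what it was before the dropout
    # (an illustrative k / (k + 1) scheme, not XGBoost's exact formula)
    for i in dropped:
        weights[i] *= k / (k + 1)

    trees.append(new_tree)
    weights.append(new_weight)

# The final prediction is the weighted sum over all trees; nothing stays dropped
final_pred = np.sum([w * t.predict(X) for w, t in zip(weights, trees)], axis=0)
print("Training MSE:", np.mean((y - final_pred) ** 2))

In XGBoost itself, this behavior is controlled through the DART parameters such as rate_drop (the dropout rate), skip_drop (the chance of skipping dropout in a round), one_drop, sample_type, and normalize_type (how the new and dropped trees are rescaled).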

Benefits of Dropout With DART:

DART can be particularly useful when you are dealing with complex datasets where traditional gradient boosting tends to overfit. However, it might require more tuning of hyperparameters such as the dropout rate and the number of boosting rounds compared to standard boosting methods.
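
A standard way to do that tuning is a cross-validated grid search over the DART-specific parameters. The sketch below reuses X_train and y_train from the first example; the grid values are illustrative starting points, not recommended settings.

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Illustrative grid over DART-specific hyperparameters
param_grid = {
    'rate_drop': [0.05, 0.1, 0.2],   # fraction of trees dropped per boosting round
    'skip_drop': [0.0, 0.25, 0.5],   # probability of skipping dropout entirely in a round
    'n_estimators': [100, 200],      # number of boosting rounds
}

dart_clf = XGBClassifier(objective='binary:logistic', booster='dart', random_state=42)

grid = GridSearchCV(dart_clf, param_grid, cv=3, scoring='accuracy', n_jobs=-1)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)
print("Best CV accuracy:", grid.best_score_)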



See Also