The LearningRateScheduler callback in XGBoost allows you to dynamically adjust the learning rate during model training. This is useful for implementing custom learning rate schedules that change as training progresses, such as decaying the rate with each boosting round.
In this example, we'll demonstrate how to use the LearningRateScheduler callback with a custom learning rate schedule function.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.metrics import root_mean_squared_error
import xgboost as xgb
import numpy as np
# Load example dataset
data = fetch_california_housing()
X, y = data.data, data.target
# Split data into train and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
# Define custom learning rate schedule
def custom_learning_rate(current_iter):
    base_learning_rate = 0.1
    lr = base_learning_rate * np.power(0.995, current_iter)
    return lr
# Create DMatrix objects
dtrain = xgb.DMatrix(X_train, label=y_train)
dval = xgb.DMatrix(X_val, label=y_val)
# Define XGBoost parameters
params = {
    'objective': 'reg:squarederror',
    'max_depth': 3,
    'learning_rate': 0.1,
    'subsample': 0.8,
    'colsample_bytree': 0.8,
}
# Create LearningRateScheduler callback
lr_scheduler = xgb.callback.LearningRateScheduler(custom_learning_rate)
# Train model with learning rate scheduler
model = xgb.train(
    params,
    dtrain,
    num_boost_round=100,
    evals=[(dtrain, "train"), (dval, "validation")],
    callbacks=[lr_scheduler]
)
# Make predictions and evaluate performance
y_pred = model.predict(dval)
rmse = root_mean_squared_error(y_val, y_pred)
print(f"Final RMSE: {rmse:.3f}")
In this example, we first load the California Housing dataset and split it into train and validation sets. We then define a custom learning rate schedule function, custom_learning_rate, that takes the current boosting iteration as input and returns the learning rate for that round. The function implements a simple exponential decay: the learning rate at iteration t is 0.1 * 0.995^t, so the rate is multiplied by 0.995 at each iteration.
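As a quick sanity check (a small sketch reusing the custom_learning_rate function defined above), you can print the scheduled rate at a few rounds:

# Inspect the scheduled learning rate at selected boosting rounds
for i in [0, 25, 50, 99]:
    print(f"iteration {i}: lr = {custom_learning_rate(i):.4f}")

With this schedule the rate starts at 0.1 and decays to roughly 0.06 by round 99.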
Next, we create DMatrix objects for the train and validation data and set the XGBoost parameters, including the initial learning rate. We create a LearningRateScheduler callback with our custom schedule function and pass it to the xgb.train function along with the other parameters.
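Note that LearningRateScheduler also accepts a precomputed sequence of learning rates, one per boosting round, instead of a callable. A minimal sketch, assuming the same 100-round decay as above:

# Equivalent scheduler built from an explicit list of per-round learning rates
rates = [0.1 * 0.995 ** i for i in range(100)]
lr_scheduler_from_list = xgb.callback.LearningRateScheduler(rates)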
The model is trained for 100 boosting iterations, and the learning rate is adjusted at each iteration according to our custom schedule. Finally, we make predictions on the validation set and evaluate the model’s performance using RMSE.
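If you want to see how the decaying learning rate affects validation error round by round, xgb.train can record the evaluation history through its evals_result argument. A small sketch reusing the objects defined above (reg:squarederror reports rmse by default):

# Re-train while capturing per-round evaluation metrics
history = {}
model_with_history = xgb.train(
    params,
    dtrain,
    num_boost_round=100,
    evals=[(dtrain, "train"), (dval, "validation")],
    callbacks=[xgb.callback.LearningRateScheduler(custom_learning_rate)],
    evals_result=history,
    verbose_eval=False,
)
# Validation RMSE after the first and last boosting rounds
print(history["validation"]["rmse"][0], history["validation"]["rmse"][-1])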
By using the LearningRateScheduler callback, we can easily implement custom learning rate schedules that change as training progresses. This can help improve the model's convergence and, in some cases, its generalization.
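Any function of the boosting round works as a schedule. For example, a step decay that halves the rate every 30 rounds (a hypothetical alternative, not part of the example above):

# Step decay: halve the learning rate every 30 boosting rounds
def step_decay(current_iter):
    return 0.1 * (0.5 ** (current_iter // 30))

step_scheduler = xgb.callback.LearningRateScheduler(step_decay)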