
XGBoost xgboost.train() vs XGBRegressor

XGBoost is a powerful library for gradient boosting, offering two main approaches to training regression models: the native xgboost.train function and the scikit-learn compatible XGBRegressor class.

While both methods can be used to train XGBoost models, they differ in their API design and level of control. This example demonstrates the key differences between these two approaches and provides code examples for each.

Let’s start by training an XGBoost regression model using xgboost.train:

import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define parameters for xgboost.train
params = {
    'objective': 'reg:squarederror',
    'max_depth': 3,
    'learning_rate': 0.1,
    'seed': 42  # the native API uses 'seed' rather than 'random_state'
}

# Create DMatrix objects for train and test data
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# Train the model using xgboost.train
model = xgb.train(params, dtrain, num_boost_round=100)

# Make predictions and evaluate model performance
y_pred = model.predict(dtest)
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error (xgboost.train): {mse:.4f}")

In this approach, we define the model parameters in a dictionary and create DMatrix objects for the train and test data.
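
As a side note, DMatrix is not limited to numpy arrays: it also accepts pandas DataFrames and scipy sparse matrices. Below is a minimal sketch with made-up data purely for illustration, assuming pandas is installed:

import numpy as np
import pandas as pd
import xgboost as xgb

# Illustrative data only; feature names are picked up from the DataFrame columns
df = pd.DataFrame(np.random.rand(5, 3), columns=['a', 'b', 'c'])
dmat = xgb.DMatrix(df, label=np.random.rand(5))
print(dmat.feature_names)              # ['a', 'b', 'c']
print(dmat.num_row(), dmat.num_col())  # 5 3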

We then use xgboost.train to train the model, specifying the parameters, training data, and number of boosting rounds.
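
xgboost.train also exposes training-loop options that the scikit-learn wrapper hides behind constructor arguments. As a minimal sketch reusing params, dtrain, and dtest from above (the watchlist labels 'train' and 'test' are arbitrary names used when printing metrics):

# Monitor a validation metric and stop early if it stops improving
evals = [(dtrain, 'train'), (dtest, 'test')]
model = xgb.train(
    params,
    dtrain,
    num_boost_round=100,
    evals=evals,
    early_stopping_rounds=10,  # stop after 10 rounds without improvement
    verbose_eval=False         # suppress per-round metric output
)
print(f"Best iteration: {model.best_iteration}")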

Now, let’s train the same model using XGBRegressor:

from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define parameters for XGBRegressor
params = {
    'max_depth': 3,
    'learning_rate': 0.1,
    'n_estimators': 100,
    'random_state': 42
}

# Instantiate XGBRegressor with parameters
model = XGBRegressor(**params)

# Train the model using fit() method
model.fit(X_train, y_train)

# Make predictions and evaluate model performance
y_pred = model.predict(X_test)
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error (XGBRegressor): {mse:.4f}")

With XGBRegressor, we define the model parameters directly in the constructor. We then instantiate the regressor with these parameters and train the model using the fit() method, which takes the training data as numpy arrays or pandas DataFrames.
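
Because XGBRegressor follows the scikit-learn estimator interface, it works directly with tools like cross-validation and grid search. A minimal sketch reusing X_train and y_train from above (the parameter grid is arbitrary and only for illustration):

from sklearn.model_selection import GridSearchCV

# Tune hyperparameters with scikit-learn's grid search
param_grid = {
    'max_depth': [3, 5],
    'learning_rate': [0.05, 0.1]
}
grid = GridSearchCV(
    XGBRegressor(n_estimators=100, random_state=42),
    param_grid,
    cv=3,
    scoring='neg_mean_squared_error'
)
grid.fit(X_train, y_train)
print(f"Best parameters: {grid.best_params_}")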

The key differences between xgboost.train and XGBRegressor are:

- Data format: xgboost.train requires the data to be wrapped in DMatrix objects, while XGBRegressor's fit() accepts numpy arrays and pandas DataFrames directly.
- Parameters: xgboost.train takes a parameter dictionary with native names (e.g. 'seed'), while XGBRegressor takes keyword arguments in its constructor (e.g. random_state).
- Boosting rounds: xgboost.train controls the number of rounds with num_boost_round, while XGBRegressor uses the n_estimators parameter.
- Ecosystem: XGBRegressor implements the scikit-learn estimator API, so it integrates with pipelines, cross-validation, and hyperparameter search; xgboost.train offers lower-level control over the training process.
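
The two APIs are also not mutually exclusive: a fitted XGBRegressor wraps a native Booster object, which you can retrieve when you need lower-level functionality. A minimal sketch, assuming the fitted model from the XGBRegressor example above:

import xgboost as xgb

# Retrieve the underlying Booster from a fitted XGBRegressor
booster = model.get_booster()
print(type(booster))  # <class 'xgboost.core.Booster'>

# The Booster can then be used with the native API, e.g. predicting from a DMatrix
y_pred_native = booster.predict(xgb.DMatrix(X_test))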

When deciding which approach to use, consider your specific needs:

- Choose XGBRegressor if you want scikit-learn compatibility: pipelines, cross-validation, hyperparameter search, and the familiar fit()/predict() interface (see the sketch below).
- Choose xgboost.train if you need finer control over training, such as custom evaluation via the evals list, early stopping, or direct work with Booster and DMatrix objects.
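
For example, if your workflow is built around scikit-learn, XGBRegressor drops into a Pipeline next to preprocessing steps. A minimal sketch reusing the train/test split from above (StandardScaler is a stand-in for any preprocessing step; tree models don't actually require feature scaling):

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# XGBRegressor as the final step of a scikit-learn Pipeline
pipeline = Pipeline([
    ('scaler', StandardScaler()),  # placeholder preprocessing step
    ('model', XGBRegressor(n_estimators=100, max_depth=3, random_state=42))
])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
print(f"Pipeline MSE: {mean_squared_error(y_test, y_pred):.4f}")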

By understanding the differences between xgboost.train and XGBRegressor, you can choose the approach that best fits your XGBoost regression training workflow.


