
XGBoost booster.predict() vs XGBRegressor.predict()

When it comes to performing inference with a trained XGBoost regression model, you have two main options: booster.predict() and XGBRegressor.predict().

While both methods allow you to make predictions, they differ in their API design and input data format. This example demonstrates the key differences between these approaches and provides code examples for each.

Let’s start by training an XGBoost model using xgb.train() and making predictions with booster.predict():

import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Generate a synthetic regression dataset and split it
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Native API parameters (note: the native API uses 'seed', not 'random_state')
params = {
    'objective': 'reg:squarederror',
    'max_depth': 3,
    'learning_rate': 0.1,
    'seed': 42
}

# The native API requires the data to be wrapped in DMatrix objects
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# xgb.train() returns a Booster object
model = xgb.train(params, dtrain, num_boost_round=100)

# booster.predict() takes a DMatrix and returns a numpy array of predictions
y_pred = model.predict(dtest)

In this approach, we define the model parameters in a dictionary and create DMatrix objects for the training and test data. Note that the native API expects 'seed' rather than the scikit-learn-style 'random_state'. We then train the model using xgb.train() and make predictions on the test data using booster.predict(), which returns a numpy array of predicted values.
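A practical consequence of this design is that booster.predict() only accepts DMatrix inputs; passing a raw numpy array raises a TypeError. Here is a minimal sketch of wrapping new samples at inference time, reusing the model trained above (X_new is a hypothetical batch of incoming data):

import numpy as np

# Hypothetical batch of 5 new samples with the same 10 features
X_new = np.random.rand(5, 10)

# New data must also be wrapped in a DMatrix before calling predict()
dnew = xgb.DMatrix(X_new)
y_new = model.predict(dnew)
print(y_new.shape)  # (5,)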

Now, let’s train the same model using XGBRegressor and make predictions with XGBRegressor.predict():

from xgboost import XGBRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Generate the same synthetic regression dataset and split it
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# scikit-learn-style parameters (n_estimators replaces num_boost_round)
params = {
    'max_depth': 3,
    'learning_rate': 0.1,
    'n_estimators': 100,
    'random_state': 42
}

# Instantiate and fit the regressor on numpy arrays directly
model = XGBRegressor(**params)
model.fit(X_train, y_train)

# XGBRegressor.predict() accepts numpy arrays or pandas DataFrames
y_pred = model.predict(X_test)

With XGBRegressor, we define the model parameters directly in the constructor. We then instantiate the regressor with these parameters and train the model using the fit() method, which takes the training data as numpy arrays or pandas DataFrames. To make predictions, we simply call model.predict() on the test data, which returns the predicted values without any DMatrix conversion.
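Under the hood, XGBRegressor wraps a native Booster, which you can retrieve with get_booster(). The two predict paths should produce the same values; a quick sanity check (a sketch, reusing the model, X_test, and y_pred from above) illustrates this:

import numpy as np
import xgboost as xgb

# Extract the underlying Booster from the fitted XGBRegressor
booster = model.get_booster()

# Predicting through the native API requires wrapping the data in a DMatrix
y_pred_booster = booster.predict(xgb.DMatrix(X_test))

# Both paths yield (effectively) the same predictions
print(np.allclose(y_pred, y_pred_booster))  # True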

The key differences between booster.predict() and XGBRegressor.predict() are:

- Input format: booster.predict() requires the test data to be wrapped in a DMatrix object, while XGBRegressor.predict() accepts numpy arrays and pandas DataFrames directly.
- API design: booster.predict() belongs to XGBoost's native API and operates on a Booster returned by xgb.train(), while XGBRegressor.predict() follows the scikit-learn estimator interface.

When deciding which approach to use, consider your specific needs:

- If your project is built around scikit-learn workflows such as pipelines, grid search, or cross-validation, XGBRegressor.predict() integrates seamlessly (see the sketch below).
- If you want fine-grained control over training via the native API, or you are already working with DMatrix objects, booster.predict() is the natural choice.
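For instance, because XGBRegressor implements the scikit-learn estimator interface, it can be dropped straight into utilities such as cross_val_score (a sketch, reusing the X and y arrays from above):

from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Any scikit-learn utility that expects an estimator works with XGBRegressor
reg = XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.1, random_state=42)
scores = cross_val_score(reg, X, y, cv=5, scoring='neg_mean_squared_error')
print(scores.mean())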

By understanding the differences between booster.predict() and XGBRegressor.predict(), you can choose the approach that best fits your regression inference workflow.
