
Configure XGBoost "reg:linear" Objective

XGBoost is a powerful library for gradient boosting, offering a variety of objective functions for different tasks.

When working on regression problems where mean squared error (MSE) is the primary evaluation metric, we would historically use the "reg:linear" objective.

For example:

from xgboost import XGBRegressor

# Initialize an XGBRegressor with the "reg:linear" objective
model = XGBRegressor(objective="reg:linear")

This objective is now deprecated because its name was misleading: "reg:linear" suggests a linear model, when in fact it specifies the squared error loss used to train the booster.

Using the "reg:linear" objective will result in a warning message:

reg:linear is now deprecated in favor of reg:squarederror.
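
Here is a minimal sketch that reproduces the warning (note that, depending on the installed XGBoost version, the message may be printed to stderr by the native library rather than raised through Python's warnings module):

import numpy as np
from xgboost import XGBRegressor

# Tiny synthetic dataset, just enough to run a fit
X = np.random.rand(20, 3)
y = np.random.rand(20)

# Fitting with the deprecated objective triggers the deprecation warning
model = XGBRegressor(objective="reg:linear", n_estimators=5)
model.fit(X, y)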

Instead, we must use the "reg:squarederror" objective to optimize the model for mean squared error.

This objective is designed to optimize the model directly for MSE, ensuring that the resulting model minimizes the average squared difference between the predicted and actual values.
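
To make the loss concrete, "reg:squarederror" minimizes the squared difference between prediction and label at each boosting step. The sketch below, assuming a recent XGBoost version whose scikit-learn wrapper accepts a callable objective, shows a hand-written equivalent that returns the gradient and hessian of the squared error loss:

import numpy as np
from xgboost import XGBRegressor

# Gradient and hessian of 0.5 * (pred - label)^2 with respect to the
# prediction; this mirrors what "reg:squarederror" computes internally
def squared_error(y_true, y_pred):
    grad = y_pred - y_true        # first derivative
    hess = np.ones_like(y_pred)   # second derivative
    return grad, hess

# Passing the callable behaves like objective="reg:squarederror"
model = XGBRegressor(objective=squared_error)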

Here’s a complete example demonstrating how to use the "reg:squarederror" objective instead of the "reg:linear" objective in XGBoost for a regression task:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error

# Generate a synthetic dataset for regression
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize an XGBRegressor with the "reg:squarederror" objective
model = XGBRegressor(objective="reg:squarederror", n_estimators=100, learning_rate=0.1)

# Fit the model on the training data
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Calculate the mean squared error of the predictions
mse = mean_squared_error(y_test, y_pred)
print(f"Mean Squared Error: {mse:.4f}")

In this example, we first generate a synthetic regression dataset using sklearn.datasets.make_regression. We then split the data into training and testing sets using train_test_split.

Next, we initialize an XGBRegressor with the "reg:squarederror" objective and set the number of estimators (n_estimators) and learning rate (learning_rate). We fit the model on the training data using model.fit.

After training, we make predictions on the test set using model.predict and calculate the mean squared error of the predictions using mean_squared_error from sklearn.metrics. Finally, we print the MSE to demonstrate the model’s performance.
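
XGBoost also reports root mean squared error ("rmse") as the default evaluation metric for this objective, so we can monitor it during training. A short sketch, assuming xgboost >= 1.6, where eval_metric is a constructor parameter:

# Track RMSE on the test set while boosting
model = XGBRegressor(objective="reg:squarederror", n_estimators=100,
                     learning_rate=0.1, eval_metric="rmse")
model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
print(model.evals_result()["validation_0"]["rmse"][-1])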

By using the "reg:squarederror" objective, we ensure that the XGBoost model is optimized for mean squared error without triggering the deprecation warning. The underlying loss is the same one "reg:linear" always specified; only the name has changed.
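
Note that in recent XGBoost releases "reg:squarederror" is the default objective for XGBRegressor, so omitting the parameter gives the same behavior:

from xgboost import XGBRegressor

# "reg:squarederror" is the default objective in recent releases,
# so this is equivalent to setting it explicitly
model = XGBRegressor(n_estimators=100, learning_rate=0.1)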


