XGBoost is a powerful tool for regression tasks.
Regression involves predicting continuous output values. XGBoost can capture both simple and highly non-linear relationships, and the objective (loss function) you choose determines what the model optimizes, such as squared error for standard least-squares regression.
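For example, swapping the objective string changes the loss the trees minimize. A brief sketch, assuming an XGBoost version recent enough to include 'reg:absoluteerror':
# Illustrative only: the objective parameter selects the regression loss
from xgboost import XGBRegressor
# Minimizes mean squared error (the default regression objective)
squared_loss_model = XGBRegressor(objective='reg:squarederror')
# Minimizes mean absolute error (requires a recent XGBoost release)
absolute_loss_model = XGBRegressor(objective='reg:absoluteerror')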
Here’s a quick guide on how to fit an XGBoost model for regression using the scikit-learn API.
# xgboosting.com
# Fit an XGBoost Model for Regression using scikit-learn API
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# Generate a synthetic dataset with 5 features
X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=42)
# Initialize XGBRegressor
model = XGBRegressor(objective='reg:squarederror', random_state=42)
# Fit the model to training data
model.fit(X, y)
# Make predictions with the fitted model
predictions = model.predict(X)
print(predictions[:5])
In just a few lines of code, you can have a working XGBoost model for regression:
- Initialize an XGBRegressor with the appropriate objective (here, 'reg:squarederror' for regression).
- Fit the model to your training data using fit().
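To get a rough sense of how well the model generalizes, you could hold out part of the data and score the predictions. A minimal sketch using scikit-learn's train_test_split and mean_squared_error; the split and metric are an illustrative extension, not part of the original example:
# Evaluate on a held-out split (illustrative extension of the example above)
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
# Same synthetic dataset as above
X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=42)
# Reserve 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Fit on the training split only
model = XGBRegressor(objective='reg:squarederror', random_state=42)
model.fit(X_train, y_train)
# Report root mean squared error on unseen data
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print(f"RMSE: {rmse:.3f}")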