
XGBoost "gbtree" vs "gblinear" booster

XGBoost offers two main boosters: “gbtree” (tree-based) and “gblinear” (linear).

The choice of booster depends on the nature of the problem and the characteristics of the data.

This example demonstrates the differences between the two boosters and provides guidance on when to use each.

import numpy as np
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier, XGBRegressor
from sklearn.metrics import accuracy_score, mean_squared_error

# Generate synthetic datasets for classification and regression
X_clf, y_clf = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=42)
X_reg, y_reg = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Split the data into training and testing sets
X_train_clf, X_test_clf, y_train_clf, y_test_clf = train_test_split(X_clf, y_clf, test_size=0.2, random_state=42)
X_train_reg, X_test_reg, y_train_reg, y_test_reg = train_test_split(X_reg, y_reg, test_size=0.2, random_state=42)

# Initialize XGBoost models with "gbtree" and "gblinear" boosters
clf_gbtree = XGBClassifier(booster='gbtree', max_depth=5, learning_rate=0.1, n_estimators=100)
clf_gblinear = XGBClassifier(booster='gblinear', learning_rate=0.1, n_estimators=100)
reg_gbtree = XGBRegressor(booster='gbtree', max_depth=5, learning_rate=0.1, n_estimators=100)
reg_gblinear = XGBRegressor(booster='gblinear', learning_rate=0.1, n_estimators=100)

# Train the models
clf_gbtree.fit(X_train_clf, y_train_clf)
clf_gblinear.fit(X_train_clf, y_train_clf)
reg_gbtree.fit(X_train_reg, y_train_reg)
reg_gblinear.fit(X_train_reg, y_train_reg)

# Make predictions on the test sets
clf_gbtree_pred = clf_gbtree.predict(X_test_clf)
clf_gblinear_pred = clf_gblinear.predict(X_test_clf)
reg_gbtree_pred = reg_gbtree.predict(X_test_reg)
reg_gblinear_pred = reg_gblinear.predict(X_test_reg)

# Evaluate the models
clf_gbtree_acc = accuracy_score(y_test_clf, clf_gbtree_pred)
clf_gblinear_acc = accuracy_score(y_test_clf, clf_gblinear_pred)
reg_gbtree_mse = mean_squared_error(y_test_reg, reg_gbtree_pred)
reg_gblinear_mse = mean_squared_error(y_test_reg, reg_gblinear_pred)

print(f"Classification Accuracy (gbtree): {clf_gbtree_acc:.4f}")
print(f"Classification Accuracy (gblinear): {clf_gblinear_acc:.4f}")
print(f"Regression MSE (gbtree): {reg_gbtree_mse:.4f}")
print(f"Regression MSE (gblinear): {reg_gblinear_mse:.4f}")

In this example, we generate synthetic datasets for both classification and regression tasks using make_classification() and make_regression() from scikit-learn, then split each dataset into training and testing sets. We initialize XGBoost models with “gbtree” and “gblinear” boosters, train them, make predictions on the test sets, and evaluate the results using accuracy for classification and mean squared error (MSE) for regression.

The “gbtree” booster is the default booster in XGBoost and is based on decision trees. It is suitable for capturing complex non-linear relationships in the data.
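
As a minimal sketch of configuring the tree booster, the snippet below shows the main tree-specific hyperparameters that control model complexity; the values are illustrative starting points, not tuned settings.

from xgboost import XGBClassifier

# Tree-specific hyperparameters govern how each tree is grown; these do not
# apply to "gblinear". Values below are illustrative, not tuned.
clf_tree = XGBClassifier(
    booster='gbtree',
    max_depth=5,           # maximum depth of each tree
    min_child_weight=1,    # minimum sum of instance weights in a leaf
    gamma=0.0,             # minimum loss reduction required to make a split
    subsample=0.8,         # fraction of rows sampled per tree
    colsample_bytree=0.8,  # fraction of columns sampled per tree
    learning_rate=0.1,
    n_estimators=100,
)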

The “gblinear” booster, by contrast, fits a regularized linear model and is better suited to problems with a large number of features and sparse data.
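
Because “gblinear” fits a linear model, its relevant knobs are regularization and the update algorithm rather than tree shape. Here is a minimal sketch; the specific values are illustrative assumptions, not recommendations.

from xgboost import XGBClassifier

# "gblinear" ignores tree parameters such as max_depth; instead it exposes
# L1/L2 regularization and linear-specific updaters. Values are illustrative.
clf_linear = XGBClassifier(
    booster='gblinear',
    reg_alpha=0.1,              # L1 penalty on weights (encourages sparse weights)
    reg_lambda=1.0,             # L2 penalty on weights
    updater='coord_descent',    # gblinear-specific optimizer
    feature_selector='cyclic',  # order in which features are updated
    learning_rate=0.1,
    n_estimators=100,
)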

In general, “gbtree” tends to perform better on structured data with a moderate number of features, while “gblinear” may be a better choice for high-dimensional, sparse datasets.
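
To make the high-dimensional, sparse case concrete, the sketch below builds a synthetic dataset with many mostly-zero features; the dataset sizes and zeroing threshold are arbitrary choices for illustration. XGBoost accepts SciPy CSR matrices directly.

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Simulate a high-dimensional, mostly-zero feature matrix (e.g., bag-of-words);
# sizes and the zeroing threshold are arbitrary, for illustration only.
X, y = make_classification(n_samples=500, n_features=2000, n_informative=20, random_state=42)
X[np.abs(X) < 1.0] = 0.0  # zero out small values to induce sparsity
X_sparse = csr_matrix(X)  # XGBoost accepts SciPy sparse matrices directly

model = XGBClassifier(booster='gblinear', n_estimators=100)
model.fit(X_sparse, y)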

However, the performance of each booster can vary depending on the specific problem and data characteristics, so it’s always a good idea to experiment with both boosters and compare their results.
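
One straightforward way to run that comparison is cross-validation. The sketch below reuses X_clf and y_clf from the example above; the 5-fold setup and accuracy metric are arbitrary choices.

from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Compare both boosters on the same classification data with 5-fold CV
# (X_clf and y_clf come from the main example above).
for booster in ('gbtree', 'gblinear'):
    model = XGBClassifier(booster=booster, learning_rate=0.1, n_estimators=100)
    scores = cross_val_score(model, X_clf, y_clf, cv=5, scoring='accuracy')
    print(f"{booster}: mean accuracy = {scores.mean():.4f} (+/- {scores.std():.4f})")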

When choosing between “gbtree” and “gblinear”, consider the following guidelines:

- Use “gbtree” when you expect non-linear relationships or feature interactions, and your data is dense and structured with a moderate number of features.
- Use “gblinear” when the data is high-dimensional and sparse, or when you expect the relationship between the features and the target to be mostly linear.
- When in doubt, benchmark both boosters on held-out data (for example, with cross-validation as sketched above) and keep the one that performs better.

By understanding the differences between XGBoost’s “gbtree” and “gblinear” boosters and knowing when to use each, you can make informed decisions and select the most appropriate booster for your machine learning tasks.


