The XGBoost Linear Booster, also known as gblinear, is an alternative to the default Tree Booster (gbtree) in the XGBoost library. While gbtree is the most widely used booster, gblinear can be particularly effective for datasets with high-dimensional sparse features, such as those commonly found in text classification tasks.
One advantage of gblinear is that it can be faster than gbtree for certain types of data, especially when dealing with a large number of features. This is because gblinear uses a coordinate descent algorithm to optimize the weights of the linear model, which can converge quickly on sparse data.
In this example, we'll demonstrate how to use the gblinear booster for a text classification task using the 20 Newsgroups dataset from scikit-learn.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score
# Load the 20 Newsgroups dataset
newsgroups = fetch_20newsgroups(subset='all', remove=('headers', 'footers', 'quotes'))
# Preprocess the text data using TfidfVectorizer
vectorizer = TfidfVectorizer(max_features=10000)
X = vectorizer.fit_transform(newsgroups.data)
y = newsgroups.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Initialize an XGBClassifier with the gblinear booster
clf = XGBClassifier(booster='gblinear', feature_selector='shuffle',
                    updater='coord_descent', learning_rate=0.1,
                    n_estimators=100, random_state=42)
# Train the model
clf.fit(X_train, y_train)
# Make predictions on the test set
predictions = clf.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, predictions)
print(f"Accuracy: {accuracy:.4f}")
In this example, we first load the 20 Newsgroups dataset using fetch_20newsgroups() from scikit-learn. We preprocess the text data using a TfidfVectorizer to convert the raw text into a matrix of TF-IDF features.
Next, we split the data into training and testing sets, initialize an XGBClassifier with the gblinear booster, train the model, make predictions on the test set, and evaluate the model's accuracy.
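To see why this dataset plays to gblinear's strengths, you can inspect the shape and sparsity of the TF-IDF matrix. The following snippet is a small optional check, reusing the X matrix produced by the vectorizer above:
# Optional check: the TF-IDF matrix is high-dimensional and extremely sparse,
# which is the setting where a linear booster with coordinate descent shines
n_samples, n_features = X.shape
density = X.nnz / (n_samples * n_features)
print(f"Samples: {n_samples}, features: {n_features}, density: {density:.4%}")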
When using the gblinear booster, there are several hyperparameters to consider:
- feature_selector: Determines how features are selected for each weight update. In this example we use 'shuffle', which updates every feature each round but in a randomly shuffled order; 'cyclic', 'random', 'greedy', and 'thrifty' are the other options.
- updater: Specifies the algorithm used to optimize the linear weights. We use 'coord_descent' for ordinary coordinate descent; the default, 'shotgun', is a parallel variant.
- top_k: The number of top features to update per round when the 'greedy' or 'thrifty' feature selector is used; it has no effect with 'shuffle' (see the sketch after this list).
- learning_rate: The step size shrinkage used in each update to prevent overfitting.
- n_estimators: The number of boosting rounds.
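If you want top_k to take effect, switch to a feature selector that honors it. The snippet below is a minimal sketch with assumed, untuned values, using the 'thrifty' selector so that only the top-ranked features are updated each round:
# A minimal sketch (assumed, untuned settings): 'thrifty' approximately ranks
# features and, together with top_k, restricts each update to the top features
clf_thrifty = XGBClassifier(booster='gblinear', updater='coord_descent',
                            feature_selector='thrifty', top_k=100,
                            learning_rate=0.1, n_estimators=100,
                            random_state=42)
clf_thrifty.fit(X_train, y_train)
print(f"Thrifty selector accuracy: {accuracy_score(y_test, clf_thrifty.predict(X_test)):.4f}")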
The optimal values for these hyperparameters may vary depending on the specific dataset and problem at hand. It’s recommended to use techniques like grid search or random search to find the best combination of hyperparameters for your task.
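As a starting point, a grid search over the gblinear-specific regularization and learning-rate settings might look like the sketch below; the parameter ranges are illustrative assumptions, not recommended values, and the search can be slow on a dataset of this size:
from sklearn.model_selection import GridSearchCV

# Illustrative (assumed) grid over L1/L2 regularization and the learning rate
param_grid = {
    'learning_rate': [0.1, 0.3, 0.5],
    'reg_alpha': [0.0, 0.001, 0.01],
    'reg_lambda': [0.0, 1.0, 10.0],
}
base_clf = XGBClassifier(booster='gblinear', updater='coord_descent',
                         feature_selector='shuffle', n_estimators=100,
                         random_state=42)
search = GridSearchCV(base_clf, param_grid, cv=3, scoring='accuracy', n_jobs=-1)
search.fit(X_train, y_train)
print(f"Best parameters: {search.best_params_}")
print(f"Best cross-validation accuracy: {search.best_score_:.4f}")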
By leveraging the XGBoost Linear Booster (gblinear) and carefully tuning its hyperparameters, you can build efficient and effective models for text classification and other tasks involving high-dimensional sparse features.