
XGBoost Compare "alpha" vs "reg_alpha" Parameters

The alpha and reg_alpha parameters in XGBoost control the L1 regularization term, which adds a penalty proportional to the absolute value of the leaf weights, shrinking them toward zero and encouraging sparser, more conservative models.
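One common way to write the per-tree regularization term (a sketch of the usual XGBoost formulation, where T is the number of leaves, w_j the leaf weights, and gamma and lambda the other regularization parameters) is:

\Omega(f) = \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} w_j^2 + \alpha \sum_{j=1}^{T} |w_j|

The final term is the L1 penalty controlled by alpha (or reg_alpha); it defaults to 0, which disables it.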

Both parameters serve the same purpose but are used in different APIs: alpha is preferred in the native XGBoost API, while reg_alpha is used in the scikit-learn API, conforming to the scikit-learn convention.

This example demonstrates how to use both parameters and confirms that they have the same effect on the model’s performance.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Generate a synthetic dataset
X, y = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=42)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create two XGBoost classifiers, one using "alpha" and the other using "reg_alpha"
model_alpha = XGBClassifier(alpha=0.1, eval_metric='logloss')
model_reg_alpha = XGBClassifier(reg_alpha=0.1, eval_metric='logloss')

# Train both models on the training set
model_alpha.fit(X_train, y_train)
model_reg_alpha.fit(X_train, y_train)

# Make predictions on the test set
predictions_alpha = model_alpha.predict(X_test)
predictions_reg_alpha = model_reg_alpha.predict(X_test)

# Compare the results
assert (predictions_alpha == predictions_reg_alpha).all()
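As an optional extra check (not part of the example above), you could also confirm that both models reach the same test accuracy, using scikit-learn's accuracy_score on the predictions already computed:

from sklearn.metrics import accuracy_score

# Both accuracies should be identical, since alpha and reg_alpha are aliases
print(f"Accuracy with alpha:     {accuracy_score(y_test, predictions_alpha):.4f}")
print(f"Accuracy with reg_alpha: {accuracy_score(y_test, predictions_reg_alpha):.4f}")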

The example below demonstrates the same functionality using the native XGBoost API with DMatrix:

import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Generate a synthetic dataset
X, y = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=42)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Convert data to DMatrix
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

# Set up parameters for XGBoost
params_alpha = {
    'objective': 'binary:logistic',
    'eval_metric': 'logloss',
    'alpha': 0.1
}

params_reg_alpha = {
    'objective': 'binary:logistic',
    'eval_metric': 'logloss',
    'reg_alpha': 0.1
}

# Train the models
model_alpha = xgb.train(params_alpha, dtrain, num_boost_round=10)
model_reg_alpha = xgb.train(params_reg_alpha, dtrain, num_boost_round=10)

# Make predictions on the test set
predictions_alpha = model_alpha.predict(dtest).round()
predictions_reg_alpha = model_reg_alpha.predict(dtest).round()

# Compare the results
assert (predictions_alpha == predictions_reg_alpha).all()
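Beyond comparing predictions, you could also compare the trained trees directly (a quick sanity check, not part of the original example). Booster.get_dump() returns each tree as text, and with identical settings the two dumps should match:

# The text dumps of the boosted trees should match as well
assert model_alpha.get_dump() == model_reg_alpha.get_dump()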

The alpha and reg_alpha parameters control the strength of the L1 regularization term in XGBoost.

A larger value (e.g., 1.0) increases the regularization strength, pushing more leaf weights toward zero and producing sparser models, while a smaller value (e.g., 0.01) decreases it, allowing more complex models.
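As a rough illustration of this effect (a sketch, not part of the original example; the helper mean_abs_leaf and the specific alpha values are chosen here purely for demonstration), you can parse the booster's text dump and compare the average absolute leaf weight under a weak and a strong L1 penalty:

import re
import numpy as np
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Generate a synthetic dataset
X, y = make_classification(n_samples=1000, n_features=10, n_classes=2, random_state=42)

def mean_abs_leaf(model):
    # Collect every leaf value from the booster's text dump
    leaves = []
    for tree in model.get_booster().get_dump():
        leaves.extend(abs(float(v)) for v in re.findall(r'leaf=(-?[\d.eE+-]+)', tree))
    return np.mean(leaves)

weak = XGBClassifier(reg_alpha=0.01, eval_metric='logloss').fit(X, y)
strong = XGBClassifier(reg_alpha=1.0, eval_metric='logloss').fit(X, y)

print(f"mean |leaf| with reg_alpha=0.01: {mean_abs_leaf(weak):.4f}")
print(f"mean |leaf| with reg_alpha=1.0:  {mean_abs_leaf(strong):.4f}")

With the stronger penalty, leaf weights whose gradient contribution falls below alpha are shrunk exactly to zero, so the average absolute leaf weight is typically noticeably smaller.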

The choice between alpha and reg_alpha ultimately comes down to the API being used: prefer alpha with the native XGBoost API and reg_alpha with the scikit-learn API.


