
Configure XGBoost "reg_lambda" Parameter

The reg_lambda parameter in XGBoost is an alias for the lambda parameter, which controls the L2 regularization term on weights. By adjusting reg_lambda, you can influence the model’s complexity and its ability to generalize.

from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Generate synthetic data
X, y = make_classification(n_samples=1000, n_features=20, n_informative=2, n_redundant=10, random_state=42)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize the XGBoost classifier with a custom reg_lambda value
model = XGBClassifier(reg_lambda=0.5, eval_metric='logloss')

# Fit the model
model.fit(X_train, y_train)

# Make predictions
predictions = model.predict(X_test)

As discussed in the example on configuring the lambda parameter, reg_lambda determines the strength of the L2 regularization term on the weights in the XGBoost model. It is a regularization parameter that can help prevent overfitting by adding a penalty term to the objective function, which discourages large weights. reg_lambda accepts non-negative values, and the default value in XGBoost is 1.

To recap, the key points when configuring the reg_lambda parameter are:

- reg_lambda is an alias for lambda and controls the strength of the L2 regularization term on the weights.
- It accepts non-negative values, and the default value is 1.
- Larger values apply a stronger penalty to large weights, which can reduce overfitting at the cost of model flexibility.

For practical guidance on choosing the right reg_lambda value, refer to the tip on configuring the lambda parameter.
