
Configure XGBoost "use_label_encoder" Parameter

The use_label_encoder parameter in XGBoost was previously used to handle non-numerical labels in the target variable for classification tasks.

It automatically converted string labels to integers, simplifying the data preprocessing step.

It was only used in the XGBoost scikit-learn API, e.g. the XGBClassifier class.

However, this parameter is now deprecated: setting it has no effect, and the scikit-learn API expects class labels to already be encoded as integers.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Generate synthetic data
X, y = make_classification(n_samples=1000, n_classes=2, n_features=10, n_informative=5, n_redundant=2, random_state=42)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize the XGBoost classifier with use_label_encoder=True (deprecated)
model_deprecated = XGBClassifier(use_label_encoder=True, eval_metric='logloss')

# Initialize the XGBoost classifier with use_label_encoder=False (also deprecated and ignored)
model_default = XGBClassifier(use_label_encoder=False, eval_metric='logloss')

# Fit the models
model_deprecated.fit(X_train, y_train)
model_default.fit(X_train, y_train)

# Make predictions
predictions_deprecated = model_deprecated.predict(X_test)
predictions_default = model_default.predict(X_test)

In the example above, setting use_label_encoder=True or use_label_encoder=False has no impact on the model’s behavior; depending on the installed XGBoost version, the parameter is either silently ignored or at most triggers a deprecation warning.
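
If you want to see how your installed version treats the parameter, here is a minimal sketch (not part of the original example) that uses Python’s standard warnings module to capture anything emitted while constructing and fitting the model:

import warnings

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Small synthetic dataset, just enough to trigger a fit
X, y = make_classification(n_samples=100, n_classes=2, n_features=5, random_state=42)

# Record any warnings raised while constructing and fitting the model
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    model = XGBClassifier(use_label_encoder=False, eval_metric='logloss')
    model.fit(X, y)

# Print whatever the installed XGBoost version reported, if anything
for w in caught:
    print(w.category.__name__, w.message)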

Before its deprecation, use_label_encoder automatically converted string labels to integers, which was convenient because the label encoding was handled internally. The built-in label encoder was deprecated in XGBoost 1.3.0 and removed in 1.6.0; since then, class labels passed to the scikit-learn API must already be integers in the range [0, num_classes).
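
As a quick illustration (not from the original article), passing raw string labels directly to XGBClassifier.fit on a recent XGBoost release typically fails, which is why the manual encoding shown next is needed; the exact error message may vary by version:

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Synthetic data with string class labels
X, y = make_classification(n_samples=100, n_classes=2, n_features=5, random_state=42)
y = ['Class_' + str(label) for label in y]

model = XGBClassifier(eval_metric='logloss')

try:
    # Recent versions expect integer labels, so this is expected to fail
    model.fit(X, y)
except ValueError as err:
    print('Fitting on string labels failed:', err)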

Currently, the recommended approach for handling non-numerical labels in XGBoost is to manually convert string labels to integers using LabelEncoder or OrdinalEncoder from scikit-learn. Here’s an example:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier

# Generate synthetic data with string labels
X, y = make_classification(n_samples=1000, n_classes=2, n_features=10, n_informative=5, n_redundant=2, random_state=42)
y = ['Class_' + str(label) for label in y]

# Manually encode string labels to integers
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(y)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize the XGBoost classifier
model = XGBClassifier(eval_metric='logloss')

# Fit the model
model.fit(X_train, y_train)

# Make predictions
predictions = model.predict(X_test)
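
Because the fitted LabelEncoder stores the original class names, the integer predictions can be mapped back to the string labels. A short continuation of the example above:

# Convert integer predictions back to the original string labels
predicted_labels = label_encoder.inverse_transform(predictions)
print(predicted_labels[:5])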

By manually encoding the labels before passing them to XGBoost, users have more control over the preprocessing step and can ensure compatibility with the current version of the library.
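
The article also mentions OrdinalEncoder as an alternative. Here is a hedged sketch of how it could be used on the target; note that OrdinalEncoder expects a 2D array, so the labels must be reshaped, which is why LabelEncoder is usually the more natural choice for a 1D target:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OrdinalEncoder
from xgboost import XGBClassifier

# Generate synthetic data with string labels
X, y = make_classification(n_samples=1000, n_classes=2, n_features=10, n_informative=5, n_redundant=2, random_state=42)
y = np.array(['Class_' + str(label) for label in y])

# OrdinalEncoder works on 2D arrays, so reshape the labels into a single column
ordinal_encoder = OrdinalEncoder()
y_encoded = ordinal_encoder.fit_transform(y.reshape(-1, 1)).astype(int).ravel()

# Split, fit, and predict as before
X_train, X_test, y_train, y_test = train_test_split(X, y_encoded, test_size=0.2, random_state=42)
model = XGBClassifier(eval_metric='logloss')
model.fit(X_train, y_train)
predictions = model.predict(X_test)

# Map integer predictions back to the original string labels
predicted_labels = ordinal_encoder.inverse_transform(predictions.reshape(-1, 1)).ravel()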



See Also