
XGBoost Configure "mphe" Eval Metric

When training an XGBoost model, the Mean Pseudo-Huber Error (MPHE) is a useful evaluation metric. The pseudo-Huber loss is a smooth approximation of the Huber loss: it behaves quadratically for small residuals and roughly linearly for large ones, making MPHE less sensitive to extreme prediction errors than squared-error metrics like RMSE. In XGBoost, mphe is the default metric for the reg:pseudohubererror objective, but it can also be set as the evaluation metric for other tasks, as in the imbalanced binary classification example below.
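
To make the definition concrete, here is a minimal sketch of the pseudo-Huber error in NumPy, assuming the standard formula with a slope of 1.0 (the default for XGBoost's huber_slope parameter). The helper pseudo_huber_error is named here for illustration and is not part of XGBoost's API:

import numpy as np

def pseudo_huber_error(y_true, y_pred, delta=1.0):
    # Pseudo-Huber loss: delta^2 * (sqrt(1 + (residual/delta)^2) - 1)
    # Quadratic near zero, approximately linear for large residuals
    residual = y_true - y_pred
    loss = delta ** 2 * (np.sqrt(1.0 + (residual / delta) ** 2) - 1.0)
    return loss.mean()

# A large residual (5.0) contributes far less than it would under squared error
print(pseudo_huber_error(np.array([0.0, 0.0]), np.array([0.1, 5.0])))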

By setting eval_metric='mphe', you can monitor your model’s performance during training and enable early stopping to prevent overfitting. This is particularly important with imbalanced datasets, where the model may quickly overfit to the majority class.

Here’s an example of how to use MPHE as the evaluation metric with XGBoost and scikit-learn:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import matplotlib.pyplot as plt

# Generate a synthetic imbalanced binary classification dataset
X, y = make_classification(n_samples=1000, n_classes=2, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create an XGBClassifier with MPHE as the evaluation metric
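# Note: passing eval_metric and early_stopping_rounds to the constructor requires XGBoost >= 1.6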
model = XGBClassifier(n_estimators=100, eval_metric='mphe', early_stopping_rounds=10, random_state=42)

# Train the model with early stopping
model.fit(X_train, y_train, eval_set=[(X_test, y_test)])

# Retrieve the MPHE values from the training process
results = model.evals_result()
epochs = len(results['validation_0']['mphe'])
x_axis = range(epochs)

# Plot the MPHE values
plt.figure()
plt.plot(x_axis, results['validation_0']['mphe'], label='Test')
plt.legend()
plt.xlabel('Number of Boosting Rounds')
plt.ylabel('MPHE')
plt.title('XGBoost MPHE Performance')
plt.show()

In this example, we generate a synthetic imbalanced binary classification dataset using scikit-learn’s make_classification function with a 9:1 class ratio. We then split the data into training and testing sets.

We create an instance of XGBClassifier and set eval_metric='mphe' to specify MPHE as the evaluation metric. We also set early_stopping_rounds=10 to enable early stopping if the MPHE doesn’t improve for 10 consecutive rounds.

During training, we pass the testing set as the eval_set to monitor the model’s performance on unseen data. After training, we retrieve the MPHE values using the evals_result() method.

Finally, we plot the MPHE values against the number of boosting rounds to visualize the model's performance during training. This plot helps us assess whether the model is overfitting or underfitting and determine a suitable number of boosting rounds.

By using MPHE as the evaluation metric for imbalanced binary classification tasks, we can effectively monitor the model’s performance, prevent overfitting through early stopping, and select the best model based on the lowest MPHE value.
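
Because early stopping is enabled, the fitted model also records which boosting round achieved the lowest MPHE. Here is a minimal sketch of inspecting it through the scikit-learn wrapper, assuming XGBoost 1.6 or later, where these attributes are populated after fitting with early stopping:

# The boosting round with the lowest MPHE on the eval_set, and its score
print(f"Best iteration: {model.best_iteration}")
print(f"Best MPHE: {model.best_score:.4f}")

# In recent XGBoost versions, predict() uses the best iteration by default
# when early stopping was enabled during training
y_pred = model.predict(X_test)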


