
XGBoost Configure "cox-nloglik" Eval Metric

Survival analysis deals with predicting the time until an event of interest occurs, such as customer churn or mechanical failure. The Cox proportional hazards model is a popular choice for this task.

When training an XGBoost model for survival analysis, the Cox negative log-likelihood (“cox-nloglik”) is an appropriate evaluation metric. It measures the model’s performance by comparing the predicted risk scores with the actual survival times and event indicators.
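
To make the metric concrete, here is an illustrative sketch of the Cox partial negative log-likelihood computed with plain NumPy, ignoring tied event times and averaging over observed events. XGBoost computes this internally; its exact tie handling and normalization may differ, so treat this as a conceptual sketch rather than a reimplementation.

```python
import numpy as np

def cox_nloglik(times, events, scores):
    """Cox partial negative log-likelihood (sketch; ignores ties).

    times:  observed follow-up times
    events: 1 if the event occurred, 0 if censored
    scores: predicted log-hazard ratios (model outputs)
    """
    nll = 0.0
    for i in range(len(times)):
        if events[i] == 1:
            at_risk = times >= times[i]  # risk set at this event time
            nll -= scores[i] - np.log(np.exp(scores[at_risk]).sum())
    return nll / max(events.sum(), 1)  # mean over observed events
```

Lower values mean the model ranks higher-risk subjects ahead of lower-risk ones more consistently.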

By setting eval_metric='cox-nloglik', you can monitor your model’s performance during training and use early stopping to prevent overfitting.

Here’s an example of how to use “cox-nloglik” as the evaluation metric with XGBoost’s scikit-learn API:

import numpy as np
from xgboost import XGBRegressor
import matplotlib.pyplot as plt

# Generate a synthetic dataset
np.random.seed(42)
X = np.random.normal(size=(1000, 10))
y = np.random.exponential(scale=2, size=1000)  # Simulated survival times (all positive, i.e. no censoring)

# Create an XGBRegressor with the Cox objective and "cox-nloglik" as the evaluation metric
model = XGBRegressor(
    n_estimators=100,
    objective='survival:cox',
    eval_metric='cox-nloglik',
    early_stopping_rounds=10,
    random_state=42
)

# Train the model with early stopping
model.fit(X, y, eval_set=[(X, y)])

# Retrieve the evaluation metric values
results = model.evals_result()
epochs = len(results['validation_0']['cox-nloglik'])
x_axis = range(0, epochs)

# Plot the negative log-likelihood values
plt.figure()
plt.plot(x_axis, results['validation_0']['cox-nloglik'], label='Validation')
plt.legend()
plt.xlabel('Number of Boosting Rounds')
plt.ylabel('Cox Negative Log-Likelihood')
plt.title('XGBoost Cox Negative Log-Likelihood Performance')
plt.show()

In this example, we generate a synthetic survival dataset; every survival time is positive, so all samples are treated as observed events (no censoring).

We create an instance of XGBRegressor and set eval_metric='cox-nloglik' to specify the Cox negative log-likelihood as the evaluation metric. We also set early_stopping_rounds=10 to enable early stopping if the metric doesn’t improve for 10 consecutive rounds.

During training, we pass the same data as the eval_set to monitor the model’s performance; in practice you would use a held-out validation set so the metric (and early stopping) reflects generalization rather than training fit. After training, we retrieve the metric values using the evals_result() method.

Finally, we plot the negative log-likelihood values against the number of boosting rounds to visualize the model’s performance during training. This plot helps us assess whether the model is overfitting or underfitting and determines the optimal number of boosting rounds.
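
As a rough illustration of reading the optimal round off such a curve, the best round is simply the one with the lowest metric value (the values below are made up for demonstration):

```python
import numpy as np

# Hypothetical per-round cox-nloglik values, e.g. taken from
# model.evals_result()['validation_0']['cox-nloglik']
nloglik = [2.31, 2.10, 1.95, 1.97, 1.99]

best_round = int(np.argmin(nloglik))  # boosting round with the lowest value
print(best_round)                     # here: round 2 (0-indexed)
```

When early stopping is enabled, XGBoost tracks this for you, but inspecting the curve directly helps confirm the stopping point is sensible.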

By using “cox-nloglik” as the evaluation metric, we can effectively monitor the model’s survival analysis performance, prevent overfitting through early stopping, and select the best model based on the lowest negative log-likelihood value.
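
One practical detail when moving beyond fully observed data: XGBoost’s survival:cox objective encodes right censoring through the sign of the label, so positive labels are observed event times and negative labels are censored follow-up times. A minimal sketch of preparing such labels (the times and event indicators here are made-up data):

```python
import numpy as np

# Follow-up times and event indicators (1 = event observed, 0 = censored)
times = np.array([5.0, 3.2, 7.1, 2.5])
event = np.array([1, 0, 1, 0])

# survival:cox expects censored samples to carry a negative label
y = np.where(event == 1, times, -times)
```

The resulting y can be passed directly as the training target in place of the raw survival times.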



See Also