
Configure XGBoost "survival:aft" Objective

The “survival:aft” objective in XGBoost is tailored for survival analysis applications where the aim is to predict the time until an event occurs.

This objective is particularly useful in scenarios such as predicting patient survival times, time to failure of mechanical parts, or any event-time prediction problems.

This example provides a step-by-step guide on how to configure the “survival:aft” objective using a synthetic dataset.

import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Generate a synthetic dataset for survival analysis
np.random.seed(42)
X = np.random.normal(size=(1000, 10))
beta = np.random.normal(size=10)
hazard = np.exp(X @ beta)
y = np.random.exponential(scale=1/hazard, size=1000)  # Simulating time-to-event data

# Create lower and upper bounds, here they are the same as y because there is no censoring
y_lower = y_upper = y

# Split the data into training and testing sets
X_train, X_test, y_train, y_test, y_lower_train, y_lower_test, y_upper_train, y_upper_test = train_test_split(
    X, y, y_lower, y_upper, test_size=0.2, random_state=42
)

# Convert data into DMatrix, specifying the label, label_lower_bound, and label_upper_bound
dtrain = xgb.DMatrix(X_train, label=y_train, label_lower_bound=y_lower_train, label_upper_bound=y_upper_train)
dtest = xgb.DMatrix(X_test, label=y_test, label_lower_bound=y_lower_test, label_upper_bound=y_upper_test)

# Define the training parameters with the "survival:aft" objective
params = {
    'objective': 'survival:aft',
    'eval_metric': 'aft-nloglik',
    'aft_loss_distribution': 'normal',
    'aft_loss_distribution_scale': 1.0,
    'learning_rate': 0.1
}

# Train the model for 100 boosting rounds on the training data
bst = xgb.train(params, dtrain, num_boost_round=100)

# Make predictions on the test set
y_pred = bst.predict(dtest)

# Output the predicted survival times for demonstration purposes
print("Predicted survival times:", y_pred)

When configuring the “survival:aft” objective, it is crucial to select the right loss distribution and scale based on your dataset’s characteristics.

The example uses a normal distribution for the loss, which is a common choice for many survival data types. However, depending on the data, other distributions like logistic or extreme might be more appropriate.
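A minimal sketch of comparing distributions, reusing dtrain and dtest from the example above: only the aft_loss_distribution parameter changes, and the scale can be tuned alongside it.

# Compare AFT loss distributions on the same training data (assumes dtrain and dtest from above)
for dist in ['normal', 'logistic', 'extreme']:
    params = {
        'objective': 'survival:aft',
        'eval_metric': 'aft-nloglik',
        'aft_loss_distribution': dist,
        'aft_loss_distribution_scale': 1.0,
        'learning_rate': 0.1
    }
    bst = xgb.train(params, dtrain, num_boost_round=100)
    print(dist, bst.eval(dtest))  # reports aft-nloglik on the test set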

Tips When Using “survival:aft” Objective:

  1. Loss Distribution Considerations: Choose a loss distribution (normal, logistic, or extreme) that closely matches the data’s error distribution. This choice significantly affects model performance.
  2. Feature Engineering: For survival analysis, consider transforming features that affect the hazard function or interact with time. Censored observations also need to be encoded appropriately through the lower and upper label bounds (see the sketch after this list).
  3. Hyperparameter Tuning: Adjusting learning_rate, max_depth, and other hyperparameters can significantly improve model performance. Experiment with different values to see what works best for your specific dataset.
  4. Evaluation: Use appropriate survival analysis metrics, like the Concordance Index (C-index), to evaluate model performance, as traditional regression metrics may not adequately capture the predictive accuracy of survival models (a sketch follows below).
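For censored data, the AFT interface encodes censoring through the label bounds passed to the DMatrix: a right-censored record keeps its observed time as the lower bound and uses infinity as the upper bound. A minimal sketch, assuming a hypothetical boolean array event that is True where the event was actually observed:

# Encode right-censored records (event is a hypothetical boolean indicator array;
# True = event observed, False = right-censored at time y)
y_lower = y.copy()
y_upper = np.where(event, y, np.inf)  # upper bound is +inf for censored records
dtrain_censored = xgb.DMatrix(X, label_lower_bound=y_lower, label_upper_bound=y_upper)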

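As a rough sketch of the evaluation tip, the Concordance Index can be computed with the lifelines package (an assumption; any concordance implementation will do) on the uncensored test data from the example:

# Concordance Index on the uncensored test set (assumes the lifelines package is installed)
from lifelines.utils import concordance_index

c_index = concordance_index(y_test, y_pred)  # higher predicted time should mean longer survival
print("Concordance Index:", c_index)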
