
Evaluate XGBoost Performance with the Recall Metric

Evaluating the performance of a classification model is crucial to understanding its effectiveness in correctly identifying instances of each class.

Recall, also known as sensitivity or true positive rate, is a key metric for assessing a classifier’s ability to identify positive instances.

Recall measures the proportion of actual positive instances that the model correctly identifies. It is calculated as the ratio of true positive predictions to the total number of actual positives, i.e. true positives / (true positives + false negatives). A high recall score indicates that the model correctly identifies a large proportion of the positive instances and therefore produces few false negatives.
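
To make the formula concrete, here is a minimal sketch that computes recall by hand from the confusion matrix and checks it against scikit-learn; the labels below are made up purely for illustration:

from sklearn.metrics import confusion_matrix, recall_score

# Hypothetical true and predicted labels, made up to illustrate the formula
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]

# For binary labels, ravel() returns the counts in the order tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

# Recall = true positives / (true positives + false negatives)
manual_recall = tp / (tp + fn)

print(manual_recall)                 # 0.8
print(recall_score(y_true, y_pred))  # same value computed by scikit-learn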

Here’s an example of how to calculate the recall score for an XGBoost classifier using the scikit-learn library in Python:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import recall_score

# Generate a synthetic dataset for binary classification
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize and train the XGBoost classifier
model = XGBClassifier(random_state=42)
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Calculate the recall score
recall = recall_score(y_test, y_pred)

print(f"Recall Score: {recall:.2f}")

In this example:

  1. We generate a synthetic dataset for a binary classification problem using make_classification from scikit-learn.
  2. We split the data into training and testing sets using train_test_split.
  3. We initialize an XGBoost classifier and train it on the training data using fit().
  4. We make predictions on the test set using the trained model’s predict() method.
  5. We calculate the recall score using scikit-learn’s recall_score function, which takes the true labels (y_test) and predicted labels (y_pred) as arguments (a multiclass variant is sketched just after this list).
  6. Finally, we print the recall score to evaluate the model’s performance.
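
The example above is binary, so recall_score can be called with its default settings. For multiclass problems, scikit-learn requires an averaging strategy such as average='macro'; the short sketch below uses made-up labels purely for illustration:

from sklearn.metrics import recall_score

# Hypothetical multiclass labels, made up for illustration
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]

# 'macro' computes recall for each class and averages the per-class scores equally
macro_recall = recall_score(y_true, y_pred, average='macro')
print(f"Macro-averaged Recall: {macro_recall:.2f}")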

By calculating the recall score, we can assess how well the XGBoost classifier identifies positive instances. This metric provides valuable insight into the model’s ability to minimize false negatives and can help guide further improvements or model selection decisions.
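
Because recall improves as the model produces fewer false negatives, one common follow-up is to lower the decision threshold applied to the predicted probabilities. The sketch below reuses the synthetic setup from the example above; the alternative threshold of 0.3 is an arbitrary value chosen for illustration:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import recall_score

# Same synthetic dataset and split as the example above
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(random_state=42)
model.fit(X_train, y_train)

# Probability of the positive class for each test instance
proba = model.predict_proba(X_test)[:, 1]

# Compare the default 0.5 threshold with an arbitrarily lower one
for threshold in (0.5, 0.3):
    y_pred = (proba >= threshold).astype(int)
    print(f"Threshold {threshold}: Recall = {recall_score(y_test, y_pred):.2f}")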


