
XGBoost Robust to Redundant Input Features

XGBoost's tree-building process performs implicit feature selection, making it robust to redundant or irrelevant input features.

This property can simplify data preparation and feature engineering, saving time and effort.

from sklearn.datasets import make_classification
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate a synthetic dataset with redundant features
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, n_redundant=10, random_state=42)

# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize and train an XGBoost classifier
model = XGBClassifier(n_estimators=100, learning_rate=0.1, random_state=42)
model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = model.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Test accuracy: {accuracy:.2f}")

# Get feature importances
importances = model.feature_importances_
print("Feature importances:", importances)

The code above demonstrates XGBoost's robustness by training a model on a synthetic dataset in which only 5 of the 20 features are informative, 10 are redundant, and the rest are noise. Despite this, XGBoost still achieves good accuracy. The feature_importances_ attribute shows how XGBoost automatically assigns low importance to redundant and uninformative features.
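
To see this directly, you can rank the features by importance. This small continuation reuses the model and importances variables from the snippet above and is illustrative only:

import numpy as np

# Rank features from most to least important
ranked = np.argsort(importances)[::-1]
for idx in ranked:
    print(f"Feature {idx}: importance {importances[idx]:.3f}")

Features that carry little signal typically end up near the bottom of the ranking with importances close to zero.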

As a general guideline, it's still recommended to perform feature selection and remove clearly irrelevant features before training an XGBoost model. This can make model training faster and more efficient. However, when the relevance of features is uncertain or the feature space is large, XGBoost's robustness is a significant advantage, allowing you to train useful models without extensive feature engineering.
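
If you do want to prune weak features before the final fit, one option is to wrap an XGBoost model in scikit-learn's SelectFromModel, which keeps only features whose importance exceeds a threshold. The sketch below reuses the synthetic dataset from above; the "mean" threshold is an illustrative choice, not a tuned recommendation:

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Same synthetic dataset with informative, redundant, and noise features
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, n_redundant=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a selector that keeps features with above-average importance
selector = SelectFromModel(XGBClassifier(n_estimators=100, learning_rate=0.1, random_state=42), threshold="mean")
selector.fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)
print(f"Selected {X_train_sel.shape[1]} of {X_train.shape[1]} features")

# Retrain on the reduced feature set
model = XGBClassifier(n_estimators=100, learning_rate=0.1, random_state=42)
model.fit(X_train_sel, y_train)
y_pred = model.predict(X_test_sel)
print(f"Test accuracy with selected features: {accuracy_score(y_test, y_pred):.2f}")

Training on the reduced set is typically faster, and because XGBoost already handles the redundancy well, accuracy should be similar to the full-feature model.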


