
XGBoost Single-Threaded Training and Prediction

XGBoost uses multi-threading (via OpenMP) to speed up model training and prediction.

However, in some cases, you may want to restrict XGBoost to use only a single thread.

This can be useful for debugging, benchmarking, or running XGBoost in resource-constrained environments.

To control the number of threads XGBoost uses, set the n_jobs parameter (nthread is an older alias for the same setting) and the OMP_NUM_THREADS environment variable.

Here’s a quick example of how to train an XGBoost model for binary classification using single-threaded execution:

# XGBoosting.com
import os
# Limit OpenMP to 1 thread; this must be set before xgboost is imported
os.environ["OMP_NUM_THREADS"] = "1"

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Generate a synthetic dataset for binary classification
X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)

# Initialize XGBClassifier for single-threaded execution
# (n_jobs replaces the deprecated nthread alias)
model = XGBClassifier(objective='binary:logistic', n_jobs=1, random_state=42)

# Train the model
model.fit(X, y)

# Make predictions
predictions = model.predict(X)
print(predictions[:5])

In this example:

  1. We set the OMP_NUM_THREADS environment variable to “1” before importing xgboost, limiting OpenMP to a single thread.
  2. We initialize an XGBClassifier with objective='binary:logistic' for binary classification and set n_jobs=1 (nthread is a deprecated alias for the same setting) to ensure single-threaded execution.
  3. We train the model with fit() and make predictions on the input data with predict(); to cap threads on an already-trained model instead, see the sketch after this list.
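
If you need to cap threads on a model that has already been trained or loaded, you can set nthread on the underlying Booster instead of re-creating the estimator. The following is a minimal sketch, reusing model and X from the example above (the exact behavior of the prediction thread setting may vary across XGBoost versions):

# XGBoosting.com
import xgboost as xgb

# Access the Booster behind the scikit-learn wrapper
booster = model.get_booster()

# Cap prediction to a single thread
booster.set_param({'nthread': 1})

# Booster.predict() takes a DMatrix and, for binary:logistic,
# returns class-1 probabilities rather than labels
probs = booster.predict(xgb.DMatrix(X))
print(probs[:5])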

By following these steps, you can restrict XGBoost to a single thread for both training and prediction, giving you precise control over how much CPU the model uses.
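
To see what single-threading costs in practice, you can time training with one thread against the multi-threaded default. The following is a minimal benchmarking sketch (the dataset size is arbitrary and timings depend on your hardware); run it in a fresh process without the OMP_NUM_THREADS cap from the earlier example, or the multi-threaded run will also be pinned to one thread:

# XGBoosting.com
import time
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# A larger synthetic dataset so the timing gap is visible
X, y = make_classification(n_samples=100_000, n_features=50, random_state=42)

# One thread versus all available cores
for n_jobs in (1, -1):
    model = XGBClassifier(objective='binary:logistic', n_jobs=n_jobs, random_state=42)
    start = time.perf_counter()
    model.fit(X, y)
    print(f"n_jobs={n_jobs}: {time.perf_counter() - start:.2f} seconds")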


