
Configure XGBoost "n_jobs" Parameter

The n_jobs parameter in XGBoost controls the number of parallel threads used for training, which can significantly speed up the training process on multi-core machines.

By default, when n_jobs is not set, XGBoost uses all available threads. Setting n_jobs explicitly caps the number of CPU cores used for parallel processing during training.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Generate synthetic data
X, y = make_classification(n_samples=100000, n_features=20, n_informative=10, n_redundant=5, random_state=42)

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize the XGBoost classifier with a specific n_jobs value
model = XGBClassifier(n_jobs=4, eval_metric='logloss')

# Fit the model
model.fit(X_train, y_train)

The n_jobs parameter supersedes the older nthread parameter in the XGBoost API; nthread is deprecated but still accepted as an alias.

The n_jobs parameter accepts integer values, with -1 indicating that all available cores should be used.
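
For example, both of the following are valid configurations (the variable names here are illustrative):

# Use all available CPU cores
model_all = XGBClassifier(n_jobs=-1, eval_metric='logloss')

# Restrict training to a single thread
model_single = XGBClassifier(n_jobs=1, eval_metric='logloss')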

It’s important to note that n_jobs only affects the training speed and has no impact on the model’s performance.
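
A minimal sketch that illustrates this, assuming the X_train, X_test, and y_train variables from the example above:

import time
import numpy as np

# Train the same model with one thread, then with all threads
model_1 = XGBClassifier(n_jobs=1, eval_metric='logloss')
start = time.perf_counter()
model_1.fit(X_train, y_train)
print(f"n_jobs=1:  {time.perf_counter() - start:.2f}s")

model_all = XGBClassifier(n_jobs=-1, eval_metric='logloss')
start = time.perf_counter()
model_all.fit(X_train, y_train)
print(f"n_jobs=-1: {time.perf_counter() - start:.2f}s")

# The predictions should match: thread count changes speed, not the learned model
print(np.array_equal(model_1.predict(X_test), model_all.predict(X_test)))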

Choosing the Right “n_jobs” Value

When setting the n_jobs parameter, there is a trade-off between training speed and resource consumption:

- A higher n_jobs value generally shortens training time, as more cores work on tree construction in parallel.
- More threads also mean higher CPU utilization, which can starve other processes running on the same machine.
- Beyond a certain point, adding threads yields diminishing returns, especially on smaller datasets.

Consider the following guidelines when choosing the n_jobs value (a benchmark sketch follows the list):

- Use n_jobs=-1 when the machine is dedicated to training and you want the fastest result.
- Set n_jobs to a fixed number, such as half the available cores, when the machine is shared with other workloads.
- Benchmark a few values on your own data; the optimal setting depends on dataset size and hardware.
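
A minimal benchmark sketch, assuming the X_train and y_train variables from the example above:

import os
import time

# Compare training time across several thread counts
for n in [1, 2, 4, os.cpu_count()]:
    model = XGBClassifier(n_jobs=n, eval_metric='logloss')
    start = time.perf_counter()
    model.fit(X_train, y_train)
    print(f"n_jobs={n}: {time.perf_counter() - start:.2f} seconds")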

Practical Tips
- Start with n_jobs=-1 and lower the thread count only if other workloads on the machine suffer.
- When combining XGBoost with scikit-learn utilities that parallelize themselves (such as cross_val_score or GridSearchCV), parallelize at one level only to avoid oversubscribing cores, as sketched below.
- Since n_jobs changes only how fast a model trains, not the model itself, it is safe to set it independently of hyperparameter tuning.
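
For the nested-parallelism tip, a minimal sketch using scikit-learn's cross_val_score (the choice of cv=5 is illustrative):

from sklearn.model_selection import cross_val_score

# Parallelize across CV folds; keep each XGBoost fit single-threaded
model = XGBClassifier(n_jobs=1, eval_metric='logloss')
scores = cross_val_score(model, X_train, y_train, cv=5, n_jobs=-1)
print(f"Mean CV accuracy: {scores.mean():.3f}")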
