The `device` parameter in XGBoost lets you specify whether to train your model on the CPU or the GPU.
Setting `device` to `'gpu'` can significantly speed up training, especially on large datasets, provided GPU support is available.
Note that GPU training only works if XGBoost was built with GPU support. You'll need to install the GPU-enabled version of XGBoost and have a compatible NVIDIA GPU with CUDA installed.
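Before enabling GPU training, you can verify that your installed XGBoost build includes CUDA support. A minimal sketch, assuming XGBoost 2.0+ (where `xgboost.build_info()` reports the build flags); the exact keys in the returned dict can vary between versions:

```python
# Check whether the installed XGBoost build includes CUDA (GPU) support.
# Assumes XGBoost 2.0+, where build_info() returns a dict of build flags.
try:
    import xgboost

    info = xgboost.build_info()
    has_cuda = bool(info.get("USE_CUDA", False))
    message = "GPU build" if has_cuda else "CPU-only build"
except ImportError:
    message = "xgboost is not installed"

print(message)
```

If this reports a CPU-only build, installing the standard `xgboost` wheel from PyPI (which ships with GPU support on Linux and Windows) is usually the simplest fix.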
Here’s an example of how to configure XGBoost to use the GPU:
```python
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Generate a synthetic binary classification dataset
X, y = make_classification(n_samples=10000, n_features=20, random_state=42)

# Create an XGBClassifier with GPU enabled
model = XGBClassifier(n_estimators=100, device='gpu', random_state=42)

# Train the model
model.fit(X, y)
```
In this example, we first generate a synthetic dataset with scikit-learn's `make_classification` function. We then create an instance of `XGBClassifier` with the `device` parameter set to `'gpu'`. Finally, we train the model on the synthetic dataset with the `fit` method.
With `device='gpu'`, XGBoost automatically uses the GPU for training, which can yield significant speedups over the CPU. The actual gains depend on factors such as the size of your dataset, the complexity of your model, and the specifications of your GPU.