Configure XGBoost Exact Tree Method (tree_method=exact)

The exact tree method is the most precise way for XGBoost to build trees, though it is no longer the default (since XGBoost 2.0, the faster hist method is used by default).

It enumerates every candidate split on every feature to find the optimal one, which can yield better predictive accuracy, especially on small datasets, but at the cost of slower training compared to the approximate methods.

Here’s an example demonstrating how to configure an XGBoost model with the exact tree method for a regression task using a synthetic dataset:

from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Generate a synthetic regression dataset
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize an XGBRegressor with exact tree method
model = XGBRegressor(tree_method='exact', max_depth=5, learning_rate=0.1, n_estimators=100)

# Train the model
model.fit(X_train, y_train)

# Make predictions on the test set
predictions = model.predict(X_test)

# Evaluate the model
mse = mean_squared_error(y_test, predictions)
print(f"Mean Squared Error: {mse:.4f}")

In this example, we first generate a synthetic regression dataset using make_regression() from scikit-learn. We then split the data into training and testing sets.

Next, we initialize an XGBRegressor with tree_method='exact' and set several other hyperparameters:

- max_depth=5: limits each tree to five levels, controlling model complexity.
- learning_rate=0.1: shrinks each tree's contribution to make the boosting process more robust.
- n_estimators=100: the number of boosting rounds (trees) to train.

We then train the model using the fit() method, make predictions on the test set using predict(), and evaluate the model’s performance using mean_squared_error().

Although the exact tree method can be computationally expensive, it can improve predictive accuracy, particularly on smaller datasets or when high precision is required. For large datasets, or when training time is a concern, the approximate methods tree_method='approx' or tree_method='hist' are usually preferred; note also that exact does not support distributed training.
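
To get a concrete feel for the speed trade-off, here is a minimal sketch that trains the same model with each tree method and reports wall-clock fit time. The dataset size is chosen arbitrarily to make the gap visible, and actual timings depend on your hardware:

import time
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

# Generate a larger synthetic dataset so timing differences are visible
X, y = make_regression(n_samples=50000, n_features=20, noise=0.1, random_state=42)

# Train an identical model with each tree method and time the fit
for method in ['exact', 'approx', 'hist']:
    model = XGBRegressor(tree_method=method, max_depth=5, learning_rate=0.1, n_estimators=100)
    start = time.perf_counter()
    model.fit(X, y)
    print(f"tree_method='{method}': {time.perf_counter() - start:.2f} seconds")

On most machines, hist finishes fastest, with the gap widening as the dataset grows.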

As with any model, it’s essential to tune the hyperparameters to find the optimal balance between model complexity and generalization. Experiment with different values for max_depth, learning_rate, and n_estimators to find the best combination for your specific problem.
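
One way to run that experiment systematically is a cross-validated grid search. The sketch below uses scikit-learn's GridSearchCV; the grid values are illustrative starting points, not recommendations:

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

# Same synthetic dataset as in the main example
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)

# Illustrative candidate values; widen or refine the grid for a real problem
param_grid = {
    'max_depth': [3, 5, 7],
    'learning_rate': [0.01, 0.1, 0.3],
    'n_estimators': [50, 100, 200],
}

# 3-fold cross-validated grid search, scored by negative mean squared error
grid = GridSearchCV(
    estimator=XGBRegressor(tree_method='exact'),
    param_grid=param_grid,
    cv=3,
    scoring='neg_mean_squared_error',
)
grid.fit(X, y)

print(f"Best parameters: {grid.best_params_}")
print(f"Best CV MSE: {-grid.best_score_:.4f}")

Keep in mind that grid search multiplies training cost, which compounds the expense of the exact method; narrowing the grid, or tuning with hist and switching to exact for the final fit, can keep run times manageable.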


