The get_xgb_params() method in XGBoost's scikit-learn API allows you to access the trained parameters of an XGBoost model.
Accessing these parameters can be useful for model analysis, interpretation, and deployment.
This example demonstrates how to use get_xgb_params() to retrieve and utilize the trained model parameters.
# XGboosting.com
# XGBoost get_xgb_params() Method
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
# Load the Housing dataset
housing = fetch_california_housing()
X, y = housing.data, housing.target
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train an XGBoost model
model = XGBRegressor(n_estimators=100, learning_rate=0.1, max_depth=3, random_state=42)
model.fit(X_train, y_train)
# Access the trained model parameters using get_xgb_params()
params = model.get_xgb_params()
# Print the retrieved parameters
print("Trained XGBoost model parameters:")
for param, value in params.items():
    print(f"{param}: {value}")
The get_xgb_params() method returns a dictionary containing the trained model parameters. These parameters include:
- Booster parameters: general parameters that control the behavior of the XGBoost model, such as learning_rate, n_estimators, and subsample.
- Tree-specific parameters: parameters that determine the structure and complexity of the individual decision trees within the XGBoost model, such as max_depth, min_child_weight, and gamma.
- Objective and evaluation metric parameters: parameters that define the optimization objective and the evaluation metric used during training, such as objective and eval_metric.
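Once you have the dictionary, individual parameters can be read by key or grouped by category. Here is a minimal sketch using a hand-written dictionary shaped like a typical get_xgb_params() result (the exact keys and values depend on your XGBoost version; with a real model you would use model.get_xgb_params() instead):

```python
# Hand-written dictionary shaped like a typical get_xgb_params() result
params = {
    "objective": "reg:squarederror",
    "learning_rate": 0.1,
    "max_depth": 3,
    "min_child_weight": 1,
    "subsample": 1.0,
}

# Read individual parameters by key
print(f"learning_rate: {params['learning_rate']}")
print(f"max_depth: {params['max_depth']}")

# Group tree-structure parameters for quick inspection
tree_keys = {"max_depth", "min_child_weight", "gamma"}
tree_params = {k: v for k, v in params.items() if k in tree_keys}
print(f"Tree parameters: {tree_params}")
```

This kind of key-based access is handy when you only care about a subset of the configuration, such as the tree-structure parameters.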
By examining these parameters, you can gain insights into the trained model’s configuration and make informed decisions about model interpretation and deployment.
Here are some practical tips for using get_xgb_params():
- Ensure that the model is trained before calling get_xgb_params(), so the returned dictionary reflects the fitted model's configuration.
- Use the retrieved parameters for various purposes, such as saving and loading trained models, fine-tuning or updating model parameters, or comparing parameters across different models.
- Handle the returned dictionary appropriately. You can access specific parameters using dictionary keys or iterate over the dictionary items to perform further analysis or logging.
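As an illustration of the saving tip above, the returned dictionary can be serialized to JSON for later reuse or auditing. This sketch uses a hand-written stand-in dictionary so it runs on its own; with a real model you would pass model.get_xgb_params() instead:

```python
import json

# Stand-in for model.get_xgb_params() so the snippet is self-contained
params = {"objective": "reg:squarederror", "learning_rate": 0.1, "max_depth": 3}

# Save the parameters to a JSON file for later reuse or auditing
with open("xgb_params.json", "w") as f:
    json.dump(params, f, indent=2)

# Load them back and confirm they round-trip unchanged
with open("xgb_params.json") as f:
    loaded = json.load(f)
print(f"Round-trip matches: {loaded == params}")
```

JSON works here because the parameter values are plain strings and numbers; the saved file can also be versioned alongside the model artifact.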
By leveraging the get_xgb_params() method, you can easily access and utilize the trained parameters of an XGBoost model in scikit-learn, enabling better model understanding and control.
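The comparison tip can be sketched with a plain dictionary diff. The two dictionaries below are hypothetical stand-ins for model_a.get_xgb_params() and model_b.get_xgb_params():

```python
# Hypothetical parameter dicts from two trained models
params_a = {"learning_rate": 0.1, "max_depth": 3, "subsample": 1.0}
params_b = {"learning_rate": 0.05, "max_depth": 3, "subsample": 0.8}

# Report parameters whose values differ between the two models
diff = {k: (params_a[k], params_b.get(k))
        for k in params_a
        if params_a[k] != params_b.get(k)}
print(f"Differing parameters: {diff}")
```

A diff like this makes it easy to see at a glance which hyperparameters changed between two experiments.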