The `"reg:tweedie"` objective in XGBoost is used for regression tasks where the target variable is non-negative and continuous.

It is based on the Tweedie distribution, which includes the Poisson, Gamma, and Gaussian distributions as special cases.

This objective is particularly useful when the target is right-skewed and non-negative, such as insurance claim amounts, where many observations are exactly zero and the rest are continuous and positive.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from sklearn.metrics import mean_tweedie_deviance
# Generate a synthetic dataset for regression with positive target values
X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)
y = abs(y)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Initialize an XGBRegressor with the "reg:tweedie" objective
model = XGBRegressor(objective="reg:tweedie", tweedie_variance_power=1.5, n_estimators=100, learning_rate=0.1)
# Fit the model on the training data
model.fit(X_train, y_train)
# Make predictions on the test set
y_pred = model.predict(X_test)
# Calculate the mean Tweedie deviance loss of the predictions
mtd = mean_tweedie_deviance(y_test, y_pred, power=1.5)
print(f"Mean Tweedie Deviance Loss: {mtd:.4f}")
```

When using the `"reg:tweedie"` objective, keep the following tips in mind:

- Ensure that the target variable is non-negative and continuous.
- The `tweedie_variance_power` parameter selects which member of the Tweedie family to use; XGBoost accepts values in the open interval (1, 2), which interpolate between the Poisson (1) and Gamma (2) distributions. A value of 1.5 is a common starting point, but you can adjust it based on your data's characteristics.
- Scale the input features to a similar range to improve convergence and model performance.
- Use the mean Tweedie deviance loss for evaluation, as it is specifically designed for this objective.
- Tune hyperparameters like `learning_rate`, `max_depth`, and `subsample` to optimize performance.