
Float Input Features for XGBoost

XGBoost accepts float input features without any special preprocessing.

Here’s a concise example demonstrating how to train and predict with a float feature matrix in XGBoost:

import numpy as np
from xgboost import XGBRegressor

# Synthetic float feature matrix X
X = np.array([[2.5, 1.0, 3.0],
              [5.0, 2.0, 4.0],
              [3.0, 1.5, 3.5],
              [1.0, 0.5, 2.0],
              [4.5, 1.8, 4.2]])

# Target variable y
y = np.array([10, 20, 15, 5, 18])

# Initialize and train XGBoost model
model = XGBRegressor(random_state=42)
model.fit(X, y)

# New float data for prediction
X_new = np.array([[3.2, 1.3, 3.4],
                  [1.5, 0.8, 2.3]])

# Make predictions
predictions = model.predict(X_new)

print("Predictions:", predictions)

This code demonstrates:

  1. Creating a synthetic float feature matrix X. XGBoost can handle float features directly without the need for any special encoding or scaling.

  2. Initializing an XGBRegressor with a random_state for reproducibility. You can use any desired hyperparameters here.

  3. Training the model on the float features X and target variable y.

  4. Making predictions on new float data X_new. The model will output predictions based on the patterns it learned from the training data.

By using float features directly with XGBoost, you can streamline your data preparation pipeline and focus on tuning the model’s hyperparameters to achieve the best performance for your specific problem.
