This document provides pseudocode descriptions of the key functions and subtasks in the XGBoost algorithm.

## 1. Regularized Learning Objective

```
Function CalculateObjective(predictions, targets, trees):
    total_loss = 0
    for i in range(len(targets)):
        total_loss += loss_function(predictions[i], targets[i])
    regularization_term = sum(Ω(tree) for tree in trees)
    return total_loss + regularization_term
```
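The objective above can be sketched in Python. This is a minimal illustration, not XGBoost's implementation: it assumes squared-error loss and the paper's penalty Ω(f) = γT + ½λΣw², and represents each tree as a hypothetical `(num_leaves, leaf_weights)` pair.

```python
def objective(predictions, targets, trees, gamma=1.0, lam=1.0):
    # Squared-error training loss, summed over examples.
    loss = sum((p - y) ** 2 for p, y in zip(predictions, targets))
    # One complexity penalty per tree: gamma * T + 0.5 * lambda * sum(w^2).
    penalty = sum(gamma * num_leaves + 0.5 * lam * sum(w * w for w in weights)
                  for num_leaves, weights in trees)
    return loss + penalty
```

Note that the penalty depends only on the trees, not on the data, which is why it is added once rather than per example.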

## 2. Gradient Tree Boosting

```
Function GradientBoosting(data, num_trees, learning_rate):
    model = initialize_model()
    for t in range(num_trees):
        residuals = calculate_residuals(model.predict(data), data.targets)
        tree = build_tree(data.features, residuals)
        model.add_tree(tree, learning_rate)
    return model
```
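A toy version of this loop for squared-error loss is sketched below. To keep it self-contained, each "tree" is just the mean of the current residuals (a depth-0 stump); real XGBoost fits a full regression tree at this step.

```python
def boost(targets, num_rounds, learning_rate=0.5):
    """Toy boosting: repeatedly fit residuals with a constant predictor."""
    n = len(targets)
    pred = [0.0] * n
    for _ in range(num_rounds):
        residuals = [y - p for y, p in zip(targets, pred)]
        leaf = sum(residuals) / n               # the "tree" output
        pred = [p + learning_rate * leaf for p in pred]
    return pred
```

With enough rounds every prediction converges to the target mean, which is exactly what a sequence of constant fits to residuals should do.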

## 3. Tree Structure Calculation

```
Function CalculateOptimalWeights(gradients, hessians, lambda):
    for each leaf j:
        numerator = sum(gradients[i] for i in leaf j)
        denominator = sum(hessians[i] for i in leaf j) + lambda
        leaf_weight = -numerator / denominator
        set_weight(leaf j, leaf_weight)
```
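The closed-form weight for a single leaf, w* = -G / (H + λ) with G and H the sums of gradients and hessians in that leaf, is a one-liner:

```python
def leaf_weight(gradients, hessians, lam):
    """Optimal leaf weight from the second-order objective: -G / (H + lambda)."""
    G = sum(gradients)
    H = sum(hessians)
    return -G / (H + lam)
```

For example, a leaf holding two examples with gradients -1 and -2 (hessians 1 each) and λ = 1 gets weight 3 / 3 = 1.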

## 4. Split Finding

### 4.1 Exact Greedy Algorithm

```
Function ExactGreedySplit(data, gradients, hessians):
    best_gain = 0
    best_split = None
    for feature in data.features:
        sort_data_by_feature(feature)
        for split_point in possible_split_points:
            gain = calculate_split_gain(split_point, gradients, hessians)
            if gain > best_gain:
                best_gain = gain
                best_split = split_point
    return best_split
```
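A single-feature sketch of the exact greedy scan follows: sort by value, sweep running sums of gradients and hessians, and score each cut with the paper's gain formula (the γ term is omitted here for brevity).

```python
def best_split(values, gradients, hessians, lam=1.0):
    """Scan all cut points on one feature; return (cut_value, gain)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    G, H = sum(gradients), sum(hessians)
    GL = HL = 0.0
    best_gain, best_cut = 0.0, None
    for k in range(len(order) - 1):
        i = order[k]
        GL += gradients[i]
        HL += hessians[i]
        GR, HR = G - GL, H - HL
        # Structure-score gain of splitting versus not splitting.
        gain = GL*GL/(HL+lam) + GR*GR/(HR+lam) - G*G/(H+lam)
        if gain > best_gain:
            best_gain = gain
            best_cut = (values[i] + values[order[k + 1]]) / 2
    return best_cut, best_gain
```

The running sums are what make the scan linear per feature after the sort, which is the point of the exact greedy algorithm.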

### 4.2 Approximate Algorithm

```
Function ApproximateSplit(data, gradients, hessians, num_buckets):
    candidate_splits = generate_candidate_splits(data, num_buckets)
    best_gain = 0
    best_split = None
    for split in candidate_splits:
        gain = calculate_split_gain(split, gradients, hessians)
        if gain > best_gain:
            best_gain = gain
            best_split = split
    return best_split
```
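The candidate-generation step can be approximated with plain quantiles, as sketched below; XGBoost itself uses a weighted quantile sketch (weighting examples by their hessians), so this is a simplified stand-in.

```python
def quantile_candidates(values, num_buckets):
    """Propose one candidate cut per quantile boundary of the feature."""
    s = sorted(values)
    step = len(s) / num_buckets
    # Deduplicate, since repeated values can map to the same candidate.
    return sorted({s[min(int(step * k), len(s) - 1)] for k in range(1, num_buckets)})
```

Only these few candidates are then scored with the usual gain formula, instead of every distinct feature value.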

## 5. Sparsity-aware Split Finding

```
Function SparsityAwareSplit(data, gradients, hessians):
    best_gain = 0
    best_split = None
    for feature in data.features:
        for direction in ["left", "right"]:
            set_default_direction(feature, direction)
            gain = calculate_split_gain_with_default(data, gradients, hessians)
            if gain > best_gain:
                best_gain = gain
                best_split = (feature, direction)
    return best_split
```
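The core of the sparsity-aware idea is how missing values are scored: their gradient/hessian mass goes to whichever side the default direction sends them, and both directions are tried. A small sketch, with G/H statistics passed in directly (illustrative names, not an API):

```python
def default_direction_gain(GL, HL, GR, HR, G_miss, H_miss, lam=1.0):
    """Try sending the missing-value mass left and right; keep the better."""
    def score(gl, hl, gr, hr):
        return gl*gl/(hl+lam) + gr*gr/(hr+lam)
    left = score(GL + G_miss, HL + H_miss, GR, HR)
    right = score(GL, HL, GR + G_miss, HR + H_miss)
    return ("left", left) if left >= right else ("right", right)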

## 6. Tree Pruning and Growing Criteria

```
Function ShouldContinueSplit(current_depth, max_depth, split_gain, min_loss_reduction):
    if current_depth >= max_depth or split_gain < min_loss_reduction:
        return False
    return True
```
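As a concrete check, the two criteria translate directly: depth is bounded, and a split is kept only if its gain exceeds the γ threshold (the per-leaf penalty doubles as the minimum useful loss reduction).

```python
def should_split(gain, depth, max_depth, gamma):
    """Keep a split only if depth allows it and the gain beats gamma."""
    return depth < max_depth and gain > gamma
```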

## 7. Column (Feature) Subsampling

```
Function FeatureSubsampling(features, subsample_ratio):
    num_features_to_select = int(len(features) * subsample_ratio)
    selected_features = random_sample(features, num_features_to_select)
    return selected_features
```
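With the standard library this is a few lines; a fresh subset is typically drawn per tree (XGBoost also supports per-level and per-node sampling via separate ratios).

```python
import random

def sample_columns(feature_names, colsample_ratio, rng=random):
    """Draw a random subset of feature names without replacement."""
    k = max(1, int(len(feature_names) * colsample_ratio))
    return rng.sample(feature_names, k)
```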

## 8. Shrinkage (Learning Rate)

```
Function UpdateTreeWeights(trees, shrinkage_rate):
    for tree in trees:
        for leaf in tree.leaves:
            leaf.weight *= shrinkage_rate
```
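Equivalently, rather than rescaling stored leaf weights afterwards, the new tree's contribution can be damped by η at the moment it is added to the ensemble's predictions:

```python
def add_tree(predictions, tree_outputs, eta=0.1):
    """Add a new tree's per-example outputs, shrunk by the learning rate."""
    return [p + eta * t for p, t in zip(predictions, tree_outputs)]
```

Shrinkage leaves room for future trees to correct the model, trading more boosting rounds for better generalization.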

## 9. Regularization Techniques

```
Function CalculateRegularization(tree, gamma, lambda):
    regularization_cost = gamma * number_of_leaves(tree) + 0.5 * lambda * sum(leaf.weight^2 for leaf in tree.leaves)
    return regularization_cost
```
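Evaluated for a single tree given its leaf weights, the penalty Ω(f) = γT + ½λ‖w‖² is:

```python
def omega(leaf_weights, gamma=1.0, lam=1.0):
    """Complexity penalty of one tree: gamma counts leaves, lambda smooths weights."""
    T = len(leaf_weights)
    return gamma * T + 0.5 * lam * sum(w * w for w in leaf_weights)
```

γ penalizes every additional leaf by a flat amount, while λ pushes individual leaf weights toward zero; the ½ factor is what produces the clean -G/(H+λ) optimal-weight formula.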

## 10. Out-of-core Computing

```
Function LoadDataInChunks(file_path, chunk_size):
    while more_data_available(file_path):
        chunk = read_next_chunk(file_path, chunk_size)
        yield chunk
```
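A generator makes this pattern natural in Python. This sketch only shows chunked streaming; real XGBoost additionally compresses blocks on disk, shards them across disks, and prefetches on a separate thread.

```python
def read_chunks(path, chunk_size):
    """Yield lists of up to chunk_size lines, so the file never sits fully in memory."""
    with open(path) as f:
        while True:
            lines = [f.readline() for _ in range(chunk_size)]
            lines = [ln for ln in lines if ln]   # drop EOF sentinels
            if not lines:
                return
            yield lines
```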

This pseudocode gives a structured view of each major component of the XGBoost algorithm as outlined in the paper. Specific implementations add further optimizations; refer to the XGBoost paper and source code for a complete treatment.