Mastering Gradient Boosting Algorithms
Chapter 1: Gradient Boosting Foundations Revisited
Ensemble Methods: A Recap
Decision Trees as Base Learners
The Additive Modeling Framework
Gradient Descent Fundamentals
Introducing the Gradient Boosting Machine (GBM)
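The stage-wise additive update that Chapter 1 builds toward, and that Chapter 2 derives in full, can be summarized in one line. A minimal sketch in standard notation (ν is the shrinkage factor, h_m the m-th base learner; the notation here is assumed, not quoted from the course):

```latex
% Stage-wise additive update: fit h_m to the pseudo-residuals r_{im}
% (the negative gradient of the loss), then add it with shrinkage \nu.
F_m(x) = F_{m-1}(x) + \nu \, \gamma_m \, h_m(x),
\qquad
r_{im} = -\left[ \frac{\partial L\big(y_i, F(x_i)\big)}{\partial F(x_i)} \right]_{F = F_{m-1}}
```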
Chapter 2: The Gradient Boosting Algorithm in Depth
Functional Gradient Descent
Deriving the Generic GBM Algorithm
Common Loss Functions for Regression
Common Loss Functions for Classification
The Role of Shrinkage (Learning Rate)
Subsampling Techniques (Stochastic Gradient Boosting)
Implementing GBM with Scikit-learn
Practice: Building a Basic GBM Model
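As a preview of the practice section above, here is a minimal sketch using scikit-learn's GradientBoostingRegressor. The synthetic dataset and hyperparameter values are illustrative assumptions, not the course's own example:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each tree is fit to the negative gradient of the squared-error loss;
# learning_rate applies shrinkage to every tree's contribution.
gbm = GradientBoostingRegressor(
    n_estimators=300,
    learning_rate=0.1,
    max_depth=3,
    loss="squared_error",
    random_state=42,
)
gbm.fit(X_train, y_train)
print(mean_squared_error(y_test, gbm.predict(X_test)))
```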
Chapter 3: Regularization in Gradient Boosting
Overfitting Challenges in Boosting
Tree Constraints: Depth, Nodes, and Splits
Shrinkage as Implicit Regularization
Subsampling (Stochastic Gradient Boosting)
Regularized Objective Functions (L1/L2)
Early Stopping Strategies
Hands-on Practical: Applying Regularization
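A minimal sketch of how the regularization levers above combine in scikit-learn: tree constraints, shrinkage, row subsampling, and early stopping. The specific values are illustrative, not recommendations:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=2000, n_features=30, noise=0.2, random_state=0)

gbm = GradientBoostingRegressor(
    n_estimators=2000,        # upper bound; early stopping picks the actual count
    learning_rate=0.05,       # shrinkage: smaller steps, more trees
    max_depth=3,              # tree constraint: shallow trees
    min_samples_leaf=20,      # tree constraint: minimum leaf size
    subsample=0.8,            # stochastic gradient boosting (row subsampling)
    validation_fraction=0.1,  # held-out fraction monitored for early stopping
    n_iter_no_change=20,      # stop after 20 rounds without improvement
    random_state=0,
)
gbm.fit(X, y)
print(f"Trees actually fit: {gbm.n_estimators_}")
```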
Chapter 4: XGBoost: Extreme Gradient Boosting
Motivation and Enhancements over GBM
The Regularized Learning Objective
Split Finding Algorithm: Exact Greedy
Split Finding Algorithm: Approximate Greedy
Sparsity-Aware Split Finding
System Optimizations: Cache Awareness and Parallelism
XGBoost API: Parameters and Configuration
Hands-on Practical: Implementing XGBoost
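A minimal sketch of the XGBoost practical via the scikit-learn wrapper, assuming XGBoost ≥ 1.6 (where early_stopping_rounds is a constructor argument). reg_lambda and reg_alpha are the L2/L1 terms in the regularized objective; tree_method selects the split-finding algorithm:

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)

model = xgb.XGBClassifier(
    n_estimators=500,
    learning_rate=0.1,
    max_depth=6,
    reg_lambda=1.0,      # L2 penalty on leaf weights
    reg_alpha=0.0,       # L1 penalty on leaf weights
    tree_method="hist",  # approximate, histogram-based split finding
    early_stopping_rounds=20,
    eval_metric="logloss",
)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
print(model.best_iteration)
```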
Chapter 5: LightGBM: Light Gradient Boosting Machine
Motivation: Addressing XGBoost's Limitations
Gradient-based One-Side Sampling (GOSS)
Exclusive Feature Bundling (EFB)
Histogram-Based Split Finding
Leaf-Wise Tree Growth
Optimized Categorical Feature Handling
LightGBM API: Parameters and Configuration
Hands-on Practical: Implementing LightGBM
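A minimal sketch of the LightGBM practical, assuming a recent LightGBM release where early stopping is configured through callbacks. num_leaves is the key capacity control under leaf-wise growth; max_bin sets the histogram resolution used for split finding:

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)

model = lgb.LGBMClassifier(
    n_estimators=500,
    learning_rate=0.05,
    num_leaves=31,   # caps model complexity under leaf-wise growth
    max_bin=255,     # histogram resolution for split finding
    random_state=42,
    verbose=-1,
)
model.fit(
    X_train,
    y_train,
    eval_set=[(X_valid, y_valid)],
    callbacks=[lgb.early_stopping(20, verbose=False)],
)
print(model.best_iteration_)
```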
Chapter 6: CatBoost: Gradient Boosting on Decision Trees
Motivation: Challenges with Categorical Data
Ordered Target Statistics (Ordered TS)
Addressing Prediction Shift: Ordered Boosting
Handling Feature Combinations
Oblivious Trees
GPU Training Acceleration
CatBoost API: Parameters and Configuration
Hands-on Practical: Implementing CatBoost
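A minimal sketch of the CatBoost practical on a hypothetical toy frame ("city" stands in for any categorical column; the data is fabricated for illustration only). Categorical columns are declared on the Pool, and ordered boosting is selected explicitly:

```python
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier, Pool

rng = np.random.default_rng(0)
n = 500

# Hypothetical toy data with one numeric and one categorical feature.
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "city": rng.choice(["paris", "tokyo", "nyc"], n),
    "label": rng.integers(0, 2, n),
})
train_pool = Pool(df[["age", "city"]], df["label"], cat_features=["city"])

model = CatBoostClassifier(
    iterations=200,
    learning_rate=0.1,
    depth=6,                  # oblivious trees: one split condition per level
    boosting_type="Ordered",  # ordered boosting counters prediction shift
    verbose=False,
)
model.fit(train_pool)
print(model.predict_proba(train_pool)[:3])
```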
Chapter 7: Advanced Topics and Customization
Model Interpretability with SHAP
TreeSHAP for Gradient Boosting Models
Global vs. Local Explanations
Probability Calibration for Classification
Implementing Custom Loss Functions
Implementing Custom Evaluation Metrics
Handling Imbalanced Datasets with Boosting
Practice: Custom Objectives and SHAP
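A minimal sketch of the two threads of this chapter: a custom loss supplied to XGBoost as a (gradient, Hessian) pair, and TreeSHAP attributions. The pseudo-Huber loss here is an illustrative choice, not necessarily the one the course uses:

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=1000, n_features=10, random_state=0)

# Custom objective: gradient and Hessian of the pseudo-Huber loss
# with respect to the raw prediction (delta=1.0 is an assumed choice).
def pseudo_huber(y_true, y_pred):
    delta = 1.0
    r = y_pred - y_true
    scale = np.sqrt(1.0 + (r / delta) ** 2)
    return r / scale, 1.0 / scale**3  # grad, hess

custom_model = xgb.XGBRegressor(n_estimators=200, objective=pseudo_huber)
custom_model.fit(X, y)

# TreeSHAP: exact, polynomial-time attributions for tree ensembles.
model = xgb.XGBRegressor(n_estimators=200).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
print(shap_values.shape)  # (100, 10): one value per sample and feature
```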
Chapter 8: Hyperparameter Optimization Strategies
The Importance of Hyperparameter Tuning
Identifying Critical Hyperparameters
Systematic Tuning: Grid Search and Randomized Search
Advanced Tuning: Bayesian Optimization
Hyperparameter Optimization Frameworks (Optuna, Hyperopt)
Tuning Strategy: From Coarse to Fine
Cross-Validation Strategies for Tuning
Hands-on Practical: Advanced Tuning with Optuna
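A minimal sketch of Bayesian hyperparameter search with Optuna, here tuning a LightGBM classifier under cross-validation. The search space and trial budget are illustrative assumptions:

```python
import lightgbm as lgb
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, n_features=20, random_state=42)

def objective(trial):
    # Search the hyperparameters that usually matter most for LightGBM.
    params = {
        "n_estimators": 300,
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 8, 128, log=True),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "subsample_freq": 1,
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = lgb.LGBMClassifier(**params, random_state=42, verbose=-1)
    return cross_val_score(model, X, y, cv=3, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)
print(study.best_params)
```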
Chapter 9: Gradient Boosting for Specialized Tasks
Learning to Rank with Gradient Boosting
Ranking Objective Functions (Pairwise, Listwise)
Survival Analysis with Gradient Boosting
Survival Objective Functions (Cox PH)
Quantile Regression with Gradient Boosting
Quantile Loss Function Implementation
Multi-Output Gradient Boosting
Practice: Implementing Ranking with XGBoost
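A minimal sketch of the ranking practical with XGBRanker, assuming XGBoost ≥ 1.6 (where fit accepts qid to group rows by query). The query/document data is randomly generated for illustration:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)

# Hypothetical ranking data: 100 queries, 10 candidate documents each,
# integer relevance labels in {0, 1, 2}.
n_queries, docs_per_query, n_features = 100, 10, 5
X = rng.normal(size=(n_queries * docs_per_query, n_features))
y = rng.integers(0, 3, size=n_queries * docs_per_query)
qid = np.repeat(np.arange(n_queries), docs_per_query)  # sorted query ids

ranker = xgb.XGBRanker(
    objective="rank:pairwise",  # LambdaMART-style pairwise objective
    n_estimators=100,
    learning_rate=0.1,
)
ranker.fit(X, y, qid=qid)

# Score and order the documents of the first query.
scores = ranker.predict(X[:docs_per_query])
print(np.argsort(-scores))
```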