Calculus Fundamentals for Machine Learning
Chapter 1: Why Calculus for Machine Learning?
The Role of Math in Machine Learning
Introducing Functions: Inputs and Outputs
Visualizing Functions: Graphs
Connecting Functions and Limits to ML
Chapter 2: Derivatives: Measuring Change
Rate of Change: Average vs. Instantaneous
The Derivative: Slope of a Tangent Line
Derivative Notation (Leibniz and Lagrange)
Calculating Derivatives: The Power Rule
Calculating Derivatives: Constants and Sums
Introduction to Higher-Order Derivatives
Practice: Calculating Simple Derivatives
Chapter 3: Optimization with Derivatives
Finding Maximum and Minimum Points
Optimization: Why Minimize or Maximize?
Cost Functions in Machine Learning
Goal: Minimizing the Cost Function
Introduction to Gradient Descent
How Derivatives Guide Gradient Descent
Visualizing Gradient Descent
Chapter 4: Handling Multiple Inputs: Partial Derivatives
Functions of Multiple Variables
Partial Derivatives: The Concept
Calculating Partial Derivatives
Partial Derivative Notation
Geometric Meaning of the Gradient
Practice: Calculating Partial Derivatives and Gradients
Chapter 5: Calculus in Action: Simple Optimization
Recap: Optimization Goal and Gradient Descent
Example: Simple Linear Regression Model
Defining a Cost Function for Linear Regression
Calculating Gradients for the Cost Function
Performing a Gradient Descent Step
The Learning Rate Parameter
Putting It All Together: The Optimization Process
Hands-on Practical: Manual Gradient Calculation