MATH 102: Calculus for Machine Learning

Course Code MATH 102
Course Name Calculus for Machine Learning
Department Mathematics
Semester Offered Odd (Term 1)
Tuition Hours 30 hours
Course Level Foundational
Pre-requisite -
Co-requisite MATH 101: Linear Algebra for Machine Learning
Course Objective Calculus answers one question that sits at the heart of machine learning: how do systems improve?

Every model students build this term will make mistakes. Calculus is the tool that tells the model how to adjust itself to reduce those mistakes. It is the reason learning is even possible.

This course starts from zero. We begin with functions and graphs, build intuition for change, and slowly move toward derivatives and optimization. By the end, students will understand how loss functions are minimized, how gradients guide learning, and how backpropagation works at a conceptual level.

The goal is not mathematical sophistication. The goal is to make sure that when a model trains, students know what is actually happening and why it works.
Course Philosophy This course emphasizes:
  • Understanding change before computing derivatives
  • Visual intuition before symbolic manipulation
  • Optimization as the central idea, not an afterthought
We do not rush into formulas. Students first build an intuition for curves, slopes, and behavior. Only then do we formalize. Every concept ties back to how models learn and improve in practice.
Course Learning Outcomes Upon successful completion of this course, students will be able to:
  • Understand functions and their behavior through graphs and real-world interpretation.
  • Compute and interpret derivatives as rates of change.
  • Use derivatives to analyze and optimize functions, especially loss functions in ML.
  • Understand gradients conceptually as directions of steepest change.
  • Explain how gradient descent works, including learning rate and convergence issues.
  • Understand backpropagation at a high level, connecting it to chain rule and computational graphs.
  • Apply calculus intuition to debug training issues, such as vanishing gradients or slow convergence.
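Several of the outcomes above hinge on gradient descent with a learning rate. As a minimal sketch of that idea (the loss f(w) = (w - 3)^2 and all names here are illustrative examples, not course material):

```python
# Minimal 1-D gradient descent sketch (illustrative example, not course code).
# Loss: f(w) = (w - 3)**2, minimized at w = 3; its derivative is f'(w) = 2*(w - 3).

def grad(w):
    """Derivative of the illustrative loss f(w) = (w - 3)**2."""
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    """Repeatedly step opposite the gradient; lr is the learning rate."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move downhill on the loss curve
    return w

print(gradient_descent(w0=0.0))  # approaches 3.0
# With lr set too large (e.g. lr=1.1), the same loop overshoots and diverges --
# the "convergence issues" the outcomes refer to.
```

Each step moves the parameter a small distance against the slope, which is the whole mechanism the later lectures formalize.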
Course Author Sagar Udasi
MSc Statistics and Data Science with Computational Finance from The University of Edinburgh.
Contact: sagar.l.udasi@gmail.com
Course Organiser TBD
No. | Lecture Title | Concepts Covered | Lecture Objective
01 | Why Models Fail Before They Learn | Functions, inputs and outputs, error intuition | Students understand that ML is about reducing error over time. Sets the stage for optimization.
02 | Drawing the World as Functions | Graphs, mapping inputs to outputs | Builds intuition for representing real-world problems as mathematical functions.
03 | What Does "Change" Really Mean? | Rate of change, slope intuition | Introduces the core idea behind derivatives without formalism.
04 | The Idea of a Derivative | Limits (intuitive), slope at a point | Students understand derivatives as local behavior of functions.
05 | Computing Derivatives Without Fear | Basic differentiation rules | Enables students to compute slopes of simple functions used in ML.
06 | Why Slopes Matter for Learning | Increasing/decreasing functions, extrema | Connects derivatives to optimization goals in ML models.
07 | Finding the Minimum: Where Learning Happens | Critical points, minima/maxima | Students learn how models identify optimal parameters.
08 | Gradient Descent: The Engine of Learning | Iterative optimization, learning rate | Core ML concept. Students understand how models improve step by step.
09 | When Gradient Descent Fails | Local minima, plateaus, divergence | Prepares students to debug real training issues in their AI product.
10 | The Chain Rule: The Secret Behind Deep Learning | Composite functions, chain rule intuition | Foundation for understanding backpropagation.
11 | What Is Backpropagation Actually Doing? | Computational graphs, gradient flow | Students connect calculus to neural network training.
12 | Loss Functions: Measuring Mistakes Properly | MSE, cross-entropy intuition | Teaches how errors are quantified in real systems.
13 | Optimization Under Constraints | Trade-offs, regularization intuition | Helps students build models that generalize, not just fit data.
14 | Efficiency Matters: Fast Computation of Gradients | Vectorization intuition, computation graphs | Connects calculus to performance in real systems.
15 | From Equations to Code | Implementing gradient descent in Python | Students translate math into working code.
16 | Case Study: Training a Simple Model from Scratch | End-to-end training loop | Shows full pipeline of learning using calculus.
17 | Lab: Build Your Own Gradient Descent | Hands-on implementation | Students implement and observe learning behavior directly.
18 | Lab: Fix a Model That Won’t Learn | Debugging gradients | Students apply concepts to fix real issues.
19 | Project Integration Review | Applying optimization to student AI agent | Ensures students use calculus in their product training loop.
20 | Final Synthesis: How Machines Actually Learn | Conceptual integration | Students leave with a clear mental model of learning systems.
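Lectures 10 and 11 connect the chain rule to computational graphs. The gradient-flow idea they build toward can be sketched on a toy graph (the function y = (w*x + b)^2 and every name below are hypothetical, chosen only for illustration):

```python
# Toy computational graph: y = (w*x + b)**2 (illustrative example, not course code).
# The forward pass stores the intermediate z; the backward pass multiplies
# local derivatives along the graph -- which is exactly the chain rule.

def forward(w, x, b):
    z = w * x + b  # linear node
    y = z ** 2     # squaring node (think: a squared-error loss)
    return y, z

def backward(x, z):
    dy_dz = 2.0 * z      # local derivative of the squaring node
    dy_dw = dy_dz * x    # chain rule: dy/dw = dy/dz * dz/dw
    dy_db = dy_dz * 1.0  # chain rule: dy/db = dy/dz * dz/db
    return dy_dw, dy_db

y, z = forward(w=2.0, x=3.0, b=1.0)  # z = 7.0, y = 49.0
dy_dw, dy_db = backward(x=3.0, z=z)
print(dy_dw, dy_db)  # 42.0 14.0
```

Backpropagation in a real network is this same pattern repeated node by node, with intermediates cached on the forward pass and reused on the backward pass.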
Component | Weightage
Written Examination (2 hours) | 50%
Practical Assignments (2 total) | 30%
Project Integration (Applied to Term 1 AI Product) | 20%
Type | Resource | Provider
Lecture | Calculus 1 (Full Course) | MIT OpenCourseWare
Lecture | Essence of Calculus | 3Blue1Brown (Grant Sanderson)
Reading | Calculus: Early Transcendentals | James Stewart
Practical | Neural Networks from Scratch | Sentdex (YouTube)
Practical | CS231n Optimization Notes | Stanford University