MATH 101: Linear Algebra for Machine Learning

| Field | Detail |
|---|---|
| Course Code | MATH 101 |
| Course Name | Linear Algebra for Machine Learning |
| Department | Mathematics |
| Semester Offered | Odd (Term 1) |
| Tuition Hours | 30 |
| Course Level | Foundational |
| Pre-requisite | - |
| Co-requisite | MATH 102: Calculus for Machine Learning |
| Course Objective | Linear Algebra is the operating system of modern AI. Every model students will build this term, whether a recommendation engine, a chatbot, or an autonomous agent, runs on vector representations and matrix operations under the hood. This course is not about solving textbook problems for their own sake. It is about understanding what a model is actually doing when it “learns”: when a neural network adjusts weights, it is transforming space; when embeddings are created, they are positioning meaning in high dimensions. The goal is simple: students should be able to reason about data geometrically, debug models intelligently, and avoid treating machine learning as a black box. |
| Course Philosophy | This course emphasizes geometric intuition over rote computation. Every concept is motivated by what it means inside a working model, and every lecture ties back to the AI product students are building in Term 1. Students should leave with a mental model of how learning systems move data through vector spaces, not a bag of memorized formulas. |
| Course Learning Outcomes | Upon successful completion of this course, students will be able to: (1) represent text, images, and user behavior as vectors; (2) measure similarity using dot products, norms, and cosine similarity; (3) interpret matrices as linear transformations and relate them to neural network layers; (4) use projection, orthogonality, and eigendecomposition to extract and compress relevant features; (5) translate these ideas into efficient NumPy code, including a simple recommendation engine; and (6) debug models geometrically instead of treating them as black boxes. |
| Course Author | Sagar Udasi, MSc Statistics and Data Science with Computational Finance, The University of Edinburgh. Contact: sagar.l.udasi@gmail.com |
| Course Organiser | TBD |

| No. | Lecture Title | Concepts Covered | Lecture Objective |
|---|---|---|---|
| 01 | Why Everything You Build Starts As A Vector | Vectors, feature representation, high-dimensional data | Students understand that every input (text, image, user behavior) must be converted into vectors before AI can use it. Direct link to their Term 1 product inputs. |
| 02 | The Geometry of Similarity | Dot product, cosine similarity, norms | Teaches how systems measure similarity. Critical for search, recommendations, and ranking in their AI microbusiness (see the first sketch after this table). |
| 03 | Matrices Are Functions, Not Tables | Matrix multiplication, linear transformations | Students see matrices as transformations of space, which directly maps to neural network layers. |
| 04 | What Actually Happens Inside One Neural Layer | Weighted sums, affine transformations | Connects linear algebra directly to the forward pass in neural networks. Removes black-box thinking (see the affine-layer sketch after this table). |
| 05 | When Data Lives in 1000 Dimensions | High-dimensional spaces, curse of dimensionality | Helps students reason about embeddings and why intuition from 2D fails in ML systems. |
| 06 | Projection: Extracting What Matters | Vector projection, subspaces | Teaches how models focus on relevant features. Useful in feature selection and embeddings (see the projection sketch after this table). |
| 07 | Orthogonality: The Idea of Independence | Orthogonal vectors, independence | Connects to uncorrelated features and clean representations in ML models. |
| 08 | The Magic Behind Dimensionality Reduction | Eigenvalues, eigenvectors, PCA intuition | Students learn how systems compress information. Useful for building efficient AI systems (see the PCA sketch after this table). |
| 09 | When Your Model Is Learning… What Is Actually Changing? | Basis change, transformations, parameter updates | Helps students interpret training as movement in vector space, not just loss minimization. |
| 10 | When Linear Algebra Breaks (And Why It Still Works) | Limits of linearity, non-linearity intuition | Prepares students to understand why neural networks stack linear layers with activations. |
| 11 | Embeddings: Turning Meaning Into Coordinates | Embedding spaces, semantic similarity | Direct application to LLMs, search, and AI agents they will build. |
| 12 | Building a Simple Recommendation Engine | Vector similarity in practice | Students implement a small system using similarity measures. Immediate product relevance. |
| 13 | Debugging Models With Geometry | Visualizing errors, vector intuition | Moves students away from blind tuning toward structured debugging. |
| 14 | Speed Matters: Efficient Matrix Computation | Sparse matrices, computational tricks | Helps students build systems that actually scale within constraints (see the sparse-matrix sketch after this table). |
| 15 | From Equations to Code | Translating math to NumPy/Python | Bridges gap between theory and implementation for their projects. |
| 16 | Case Study: How Your AI Agent Uses Linear Algebra | Real-world system breakdown | Shows how all concepts tie into their Term 1 AI agent. |
| 17 | Lab: Fixing a Broken Model | Applied debugging session | Students apply concepts to fix an intentionally flawed model. |
| 18 | Lab: Improving Recommendations | Optimization using similarity tweaks | Iteration mindset applied to a working system. |
| 19 | Project Integration Review | Applying concepts to student projects | Ensures students are actually using linear algebra in their product. |
| 20 | Final Synthesis: Thinking in Vectors | Conceptual integration | Students leave with a mental model, not a bag of formulas. |
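
To make the lecture list concrete, the short sketches below preview a few of the ideas in code. First, Lectures 02 and 12 in miniature: cosine similarity used as a recommendation signal. This is an illustrative sketch, not course material; the item names, vectors, and query are invented for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: a·b / (|a| |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional item features (e.g. genre scores).
items = {
    "item_a": np.array([0.9, 0.1, 0.0, 0.3]),
    "item_b": np.array([0.8, 0.2, 0.1, 0.4]),
    "item_c": np.array([0.0, 0.9, 0.8, 0.1]),
}
query = np.array([1.0, 0.0, 0.0, 0.2])  # stand-in for "what the user liked"

# Rank items by how closely they point in the same direction as the query.
ranked = sorted(items, key=lambda k: cosine_similarity(query, items[k]), reverse=True)
print(ranked)  # item_a and item_b outrank item_c
```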
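Next, Lectures 03 and 04 in miniature: a single neural layer is an affine map y = Wx + b followed by a nonlinearity. The shapes and random values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # weight matrix: a linear map from R^5 to R^3
b = np.zeros(3)              # bias vector
x = rng.normal(size=5)       # one input example, already encoded as a vector

z = W @ x + b                # the linear-algebra core of a layer's forward pass
y = np.maximum(z, 0.0)       # ReLU activation: the non-linear part
print(z.shape, y)            # (3,) -- the layer has transformed the space
```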
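Lecture 06 in miniature: projecting a vector v onto u keeps only the component of v along u, and the remainder is orthogonal to u. The numbers are arbitrary.

```python
import numpy as np

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

proj = (np.dot(v, u) / np.dot(u, u)) * u       # projection of v onto u
residual = v - proj                            # what the projection discards
print(proj, residual, np.dot(proj, residual))  # dot product ~ 0: orthogonal
```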
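Lecture 08 in miniature: the eigenvectors of the covariance matrix point along the directions of greatest variance, which is the core of PCA. The data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])  # correlated 2D data
Xc = X - X.mean(axis=0)                  # center the data

cov = (Xc.T @ Xc) / (len(Xc) - 1)        # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh handles symmetric matrices
top = eigvecs[:, np.argmax(eigvals)]     # first principal direction

X1 = Xc @ top                            # project 2D data down to 1D
print(eigvals, X1.shape)                 # most of the variance survives
```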
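Finally, Lecture 14 in miniature, assuming SciPy is available alongside NumPy: a sparse format stores only the nonzero entries, so a matrix-vector product scales with those entries rather than with the full grid of cells. The sizes and density are illustrative.

```python
import numpy as np
from scipy import sparse

# 10,000 x 10,000 matrix with ~0.1% nonzeros: ~100k stored values, not 100M.
A = sparse.random(10_000, 10_000, density=0.001, format="csr", random_state=0)
x = np.ones(10_000)

y = A @ x              # touches only the stored nonzero entries
print(A.nnz, y.shape)
```
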
| Component | Weightage |
|---|---|
| Written Examination (2 hours) | 50% |
| Practical Assignments (2 total) | 30% |
| Project Integration (Applied to Term 1 AI Product) | 20% |

| Type | Resource | Provider |
|---|---|---|
| Lecture | MIT 18.06 Linear Algebra | Prof. Gilbert Strang |
| Lecture | Essence of Linear Algebra | 3Blue1Brown (Grant Sanderson) |
| Reading | Linear Algebra and Its Applications | Gilbert Strang |
| Practical | NumPy Linear Algebra Documentation | NumPy |
| Practical | CS231n: Linear Algebra Review | Stanford University |