How To Design A Learning Journey That Works

Sagar Udasi, 13th December, 2025

Why learn anything? Why become an expert?

I think the core purpose is freedom. We learn to increase our degrees of freedom: to see more options, process them at the complexity levels they demand, make better choices, and act with less dependence on others. Expertise isn't about knowing more; it's about how few permissions you need, how little you depend on others.

Now that we have a purpose to learn, the immediate question that follows is: why doesn't everyone become an expert? In reality, almost everyone does want to become an expert at something, but the learning journey feels very difficult in general, and most people are not in the right learning environment.

Why is learning so difficult? What makes an environment wrong?

Most education is designed on the hope that learners are motivated enough to get through the process, and that the desired outcome will somehow be achieved.

Real learning isn't even linear. It's more of a Brownian motion of sorts: it happens in jumps, stalls, a bit of randomness, and occasional panic. Most learning journeys forcibly fit it into a linear pattern. The modules and skills learnt don't become useless, but delayed or premature exposure kills curiosity. Students learn answers to questions they don't have yet.

Once a student has lost curiosity, they must rely on the only other force that can get them through the journey: discipline. However, on any day, curiosity pulls better than discipline pushes.

Some good learning journeys manage not to kill curiosity while keeping discomfort in deliberately, to give the journey a real feel (Tetr, for example). However, the amount of discomfort must be calibrated very cautiously. Too little discomfort, and students coast. Too much, and they freeze. Most learners never find the sweet spot where learning compounds, and they are not to be blamed: they are largely unaware of the terrain they are trying to navigate.

Not everything that causes discomfort is good, though. Positive discomfort comes from ambiguity, ownership, decision-making under uncertainty, problems of wider scope, and real consequences for poor performance. Early on, students can be given structured problems with clear success criteria. But the structure should decay quickly. By the later stages, the real challenge should not be how to do something, but what is worth doing at all.

How can we judge what will bring positive discomfort? It usually comes from the task itself, not from confusion about instructions. It produces specific questions, not vague anxiety. It is time-bounded and followed by reflection. It leaves students slightly more confident after surviving it.

Negative discomfort comes from unnecessary tool complexity, jargon density, administrative overhead, and artificial rigor. Early mastery of tools is less important than early mastery of judgment. It's better for students to deeply understand why a simple model fails than to shallowly deploy a complex one.

How can we judge what will bring negative discomfort? It usually comes from unclear expectations and missing prerequisites, persists due to lack of feedback, and teaches helplessness instead of resilience.

So, how should we learn? What makes an ideal learning journey?

Derek Muller, in his video The Expert Myth, brilliantly explains the four criteria it takes to become an expert in any field. Any learning environment we create, any learning journey we design, must respect these four criteria as something like the physics of learning.

I have my own interpretation of these criteria, and of how they become laws to be respected while designing a learning journey.

Criterion 1: Many Repeated Attempts

What people hear is practice. Malcolm Gladwell suggested his famous 10,000-hour rule for becoming an expert. However, what actually matters is iteration velocity. Naval Ravikant upgraded the rule by suggesting 10,000 iterations to become an expert. You don't get good by doing hard things once. You get good by doing medium things many times. Most learning journeys kill repetition by making assignments too big, failures too expensive, and output too polished.

Thumb Rule: You only get repetition if failure is cheap.

People don't avoid practice because they're lazy. They avoid it because failure is expensive: socially, reputationally, emotionally, or financially. When mistakes feel like verdicts, people stop iterating.

Early in the journey, an ideal learning path should offer many small builds, not grand projects. Students should encounter fast cycles, not semester-long suspense. They should produce a huge number of disposable outputs, not sacred submissions.

Criterion 2: Non-Random, Valid Environment

Again, what people hear here is real-world problems. What actually matters is causal structure. Random environments teach superstition. Fake environments teach arrogance. Most universities accidentally create invalid environments: toy datasets, overly clean problem statements, grades detached from consequences, and so on.

Thumb Rule: Expertise only forms when effort maps reliably to outcomes.

The environment must push back honestly. This means the constraints must be real but not random, success must be earned, and bad decisions must hurt (a little).

Criterion 3: Feedback, As Quickly As Possible

Again, what people hear here is feedback. What actually matters is how quickly that feedback arrives. Feedback delayed past emotional closure is ignored. Feedback detached from action is decorative.

Thumb Rule: Learning happens when your internal model breaks, not when someone explains it later.

Hence, grades at the end of the term are too late to teach anything meaningful. We need to collapse the distance between Action \(\rightarrow\) Consequence \(\rightarrow\) Reflection.

Criterion 4: Don't Get Too Comfortable

What people hear here is challenge. What actually matters is newer problems with rising ceilings, not just rising difficulty.

Thumb Rule: Comfort is evidence that learning has stopped. And constant overload is just as bad. The trick is the right exposure.

With time, problem ambiguity can increase, not just difficulty. The framing should shift from “solve this” to “decide what matters”.


With all four criteria in place, expertise emerges automatically. Great learning systems feel demanding but fair, uncomfortable but addictive, and serious without being cruel.

What should we learn?

I would say either business or science & technology. I am a bit biased because of the leverage these two fields can provide, but mainly they are two of my favorites (and, it looks like, Tetr's too!). I will focus on AI as an illustrative example.

Before even choosing the subjects, modules, or skills to be taught in the AI curriculum, I'd lock in four principles:

  1. Our aim is not to be complete. Our aim is to be accurately directional. We optimize for trajectory, not coverage.
  2. Learning must be downstream of building. Concepts exist to solve problems one already feels. So: always a problem before a concept.
  3. Judgment beats information. Knowing when to use which algorithm matters more than knowing every algorithm.
  4. The output is people, not transcripts. Grades don't ship, people do.

The natural order of learning is: concrete experience \(\rightarrow\) pattern recognition \(\rightarrow\) abstraction \(\rightarrow\) formalization. Universities usually reverse this. That’s why students feel lost. So the AI curriculum should start with visible behavior, not invisible math.

Year 1: Make Computers Do Things (immediately)

Goal: Remove Fear. Build Agency.

Students should ship something in the first few weeks! And to enable that, we should teach:

  • programming as thinking, not syntax
  • APIs as leverage
  • prompting models to do useful work
  • simple automation scripts
  • data as something you collect, not something that appears in CSV files, Excel sheets, or SQL tables
  • tiny AI tools: summarizers, recommenders, classifiers (see the sketch below)

No AI theory just yet; only one question: how can we make machines useful? Once students feel they can make something work, everything else compounds. Make them stop asking “will I ever understand this?” and start asking “how can I do this better?” That’s the moment education becomes self-propelling.
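To give a flavor of the scale involved, a first “tiny tool” can be as small as a keyword-frequency summarizer. A minimal sketch in plain Python (the function name and scoring rule are illustrative choices, not a prescribed design):

```python
# A tiny extractive summarizer: score sentences by word frequency,
# return the top-k. Small enough to build, break, and rebuild in a week.
import re
from collections import Counter

def summarize(text: str, k: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    # Score each sentence by the total frequency of its words.
    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:k]
    # Keep the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)

print(summarize("Cats sleep a lot. Cats also play. Dogs bark loudly.", k=1))
```

No ML libraries, obviously imperfect, and cheap to throw away and rebuild: exactly the kind of disposable output that makes repetition affordable.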

How are we incorporating the four expert criteria?

Criterion 1: We push for weekly builds. Repeating the same task with variations should be encouraged (e.g., build 5 different kinds of recommenders, not 1 perfect one). Grading is based on the number of iterations, not brilliance.

Criterion 2: I would prefer to keep the messy details of real-world problem statements away for this year.

Criterion 3: We push for elements that provide immediate feedback. For example, code either runs or it doesn't. Relying on something that will only work out at the end of the semester shouldn't be acceptable. Ask students to generate images, plots, and graphical reports, which should look visibly wrong when incorrect, as in the sketch below. When the system gives feedback, it is immediate and impersonal.
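A minimal sketch of that kind of self-evident feedback, assuming matplotlib (the buggy model and its sign flip are invented for illustration):

```python
# Feedback that is immediate and impersonal: plot predictions against
# actuals. If the model is wrong, the picture looks wrong at a glance.
import matplotlib.pyplot as plt

xs = list(range(10))
actual = [2 * x + 1 for x in xs]         # ground truth: y = 2x + 1
predicted = [-2 * x + 1 for x in xs]     # a buggy model with a sign flip

plt.plot(xs, actual, label="actual")
plt.plot(xs, predicted, label="predicted")
plt.legend()
plt.title("A sign bug you can see instantly")
plt.show()
```

No grader is needed; the wrong slope announces itself.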

Criterion 4: In this year, we give students clear tasks, clear success criteria, and low-ambiguity statements, to get them into momentum.

Year 2: Why Things Work (Just Enough Theory)

Goal: Build intuition without drowning in rigor.

Now that students have built systems that work imperfectly, they’re ready to ask why the systems are imperfect and how they can be made better. Here, I'd love to introduce:

  • Vectors as representations, not arrows
  • Similarity as geometry, not formulae
  • Overfitting, training vs inference, evaluations, tradeoffs, etc., as lived experience
  • Basic ideas of probability & optimization

Math enters only where it explains observed behavior.
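For example, “similarity as geometry” can be shown before it is ever defined. A minimal sketch, assuming NumPy and toy hand-made vectors (real embeddings would come from a model):

```python
# Similarity as geometry: the cosine of the angle between two vectors.
# The vectors here are toy 3-d "embeddings"; real ones come from a model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

cat    = np.array([0.9, 0.8, 0.1])
kitten = np.array([0.85, 0.75, 0.2])
car    = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(cat, kitten))  # close to 1: nearly parallel
print(cosine_similarity(cat, car))     # much smaller: pointing elsewhere
```

Students see nearby meanings as nearby directions first, and meet the formula afterwards.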

How are we incorporating the four expert criteria?

Criterion 1: We encourage students to build multiple failed models on the same dataset, and motivate them to rebuild the same idea with different initial assumptions. Submissions happen as versions 1, 2, and 3 against the same problem statement.

Criterion 2: We can now introduce messy data, ambiguous evaluation, and multiple right answers with unequal outcomes. Treat it like a game now.

Criterion 3: Evaluation metrics still remain instant. However, the issues that arise while building, like overfitting, are felt rather than defined. Students predict outcomes and anticipate errors before running or building models.

Criterion 4: Give students a variety of methods to tackle the same problem. Let them make the decisions. Repetition gets everything covered, and they see the tradeoffs firsthand as the results of their own decision-making.

Year 3: Building Systems That Survive Reality

Goal: Move from demos to products.

Most AI demos fail not because models are weak, but because systems are fragile. To build robust, real-world systems, students will need to know about:

  • Data pipelines
  • Feedback loops
  • Model drift
  • Failure modes
  • Latency, cost, and scale tradeoffs
  • Engineering & design principles
  • Ethics and other constraints you cannot ignore
  • Strategy
  • Behind-the-scenes math that powers the AI engine

Students should now be serving real users, even if it’s just 20 people. Nothing teaches faster than watching users misuse your product. Along with building the product, students start amassing rigorous ideas and connecting the dots independently to come up with creative solutions. The motivation is interdisciplinary thinking and academic rigor, for the robustness and correctness of the solutions they build.
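To illustrate how a concept like model drift can be felt rather than memorized, here is a minimal sketch of a crude distribution-shift check (the thresholding rule is a stand-in assumption; real systems use proper statistical tests such as KS or PSI):

```python
# A crude drift check: compare a live feature's distribution against the
# training window. Effort maps reliably to outcomes only if you notice
# when your inputs have quietly shifted.
import statistics

def drifted(train: list[float], live: list[float], threshold: float = 2.0) -> bool:
    mu, sigma = statistics.mean(train), statistics.stdev(train)
    live_mu = statistics.mean(live)
    # Flag drift if the live mean sits more than `threshold` train-stdevs away.
    return abs(live_mu - mu) > threshold * sigma

train_lengths = [120.0, 130.0, 125.0, 128.0, 122.0]  # inputs seen at launch
live_lengths  = [260.0, 240.0, 255.0, 250.0, 245.0]  # what users actually send
print(drifted(train_lengths, live_lengths))  # True: time to retrain or rethink
```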

How are we incorporating the four expert criteria?

Criterion 1: Make learners iterate on shipping \(\rightarrow\) user feedback \(\rightarrow\) rebuild loops. They try to improve the product and see how their changes impact usage. Postmortems now become mandatory.

Criterion 2: Real users bring unpredictable behaviour. Metrics start to conflict (accuracy vs. cost vs. latency). There are no longer clean, perfect solutions.

Criterion 3: Feedback comes as data from users' actual usage of the product. Systems fail under load. Feedback now comes in several types: social, technical, economic.

Criterion 4: Students choose their own problems this year. Faculty stop giving “the right approach” because there is no right approach. Guidance definitely remains, but handholding is reduced significantly.

Year 4: Taste, Judgment, and Original Work

Goal: Produce independent builders.

This year should feel uncomfortable in a new way: not technically, but existentially. Students should be forced to ask which problems are worth building for, where AI fails to generate value, and how their product can eventually evolve.

The mega project should be one serious tech product attempt, with real users, real metrics, real failures, under a real company. At this stage, faculty become less like teachers and more like partners, directors, or critics.

How are we incorporating the four expert criteria?

Criterion 1: Iterating toward product-market fit (PMF) is something everyone has to do. Students will come across the concept of a sprint: the shorter the sprints, the faster the shipping, and the better the chances of reaching PMF. Users, metrics, cost, and latency force repetition. Faculty stop assigning; reality assigns.

Criterion 2: Students battle an environment filled with market reality, user churn, model drift, and unclear rubrics. The goal, however, remains very clear: the number of users, or revenue.

Criterion 3: No instructor feedback unless requested. Metrics and dashboards replace grades. Reality becomes the examiner.

Criterion 4: This is the most uncomfortable environment, but in a challenging way: only goals, only bets. The discomfort is about survival, not technique. Students have full decision-making freedom, even to pivot, but everything carries costs and consequences.


This would be my attempt at creating an ideal learning environment! Now, let's see what it looks like when translated into a curriculum!