Empirical Cycling Podcast
Understanding how to conduct personalized experiments lets athletes tailor training to their own physiology and find performance gains more efficiently. As data‑driven training becomes mainstream, mastering n=1 methods helps cyclists cut through generic advice and make evidence‑based adjustments, making this episode especially relevant for anyone who wants to optimize training with measurable results.
The episode demystifies the n=1 training experiment, a data‑driven framework that treats each rider as a single‑subject study. By framing workouts as hypotheses—such as “adding 10 minutes of high‑intensity intervals will raise FTP”—cyclists can move beyond generic training plans and directly measure what works for them. This approach resonates with performance‑focused businesses because it aligns coaching resources with measurable outcomes, reduces trial‑and‑error, and creates a feedback loop that accelerates marginal gains. Listeners learn why individualized experimentation is becoming a cornerstone of modern cycling analytics.
The host outlines a step‑by‑step protocol: first, establish a robust baseline using power‑meter, heart‑rate, and perceived‑exertion logs collected over two to three weeks. Next, select a single variable—duration, intensity, cadence, or recovery—and hold all other training inputs constant. The experiment runs for a predefined block, typically four to six weeks, with consistent weekly testing (e.g., a 20‑minute FTP test) to capture performance shifts. Tools such as Strava, TrainingPeaks, or open‑source R scripts help automate data extraction, while simple statistical checks like paired t‑tests or Bayesian credible intervals flag meaningful change.
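As a rough illustration of the "simple statistical check" mentioned above, here is a minimal paired t‑test on weekly 20‑minute test power, comparing a baseline block to an intervention block. All wattage numbers and names here are illustrative assumptions, not data from the episode, and the hardcoded critical value stands in for a proper t‑distribution lookup.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(before, after):
    """Paired t statistic: mean of the per-week differences divided by
    the standard error of those differences."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical weekly 20-minute test averages (watts) for one rider.
baseline_block = [285, 288, 284, 290, 287, 286]
intervention_block = [291, 294, 290, 296, 293, 295]

t = paired_t_statistic(baseline_block, intervention_block)
# Two-tailed critical value for df = 5 at alpha = 0.05 is about 2.571.
print(f"t = {t:.2f}, significant: {abs(t) > 2.571}")
```

In practice you would pull these weekly test values from your training log export rather than typing them in, but the comparison logic is the same.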
Interpreting the results requires balancing statistical significance with practical relevance. A modest 1‑2% FTP increase may be statistically real yet operationally negligible for elite squads, whereas a larger swing can justify program redesign. Coaches are encouraged to document assumptions, adjust training loads, and repeat the cycle, turning each experiment into a continuous improvement loop. For businesses, this methodology translates into higher client retention, evidence‑based marketing, and scalable personalization services. The episode equips listeners with a reproducible template to turn raw cycling data into actionable performance strategy.
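The "statistically real but operationally negligible" distinction can be sketched as a two‑part decision rule: act only when a change is both significant and larger than a smallest‑worthwhile‑change threshold. The 2% default threshold and the function name below are illustrative assumptions, not figures from the episode.

```python
def worth_acting_on(old_ftp, new_ftp, statistically_significant,
                    min_relevant_pct=2.0):
    """A change justifies a program redesign only if it is statistically
    real AND large enough to matter in practice (assumed 2% threshold)."""
    pct_change = 100.0 * (new_ftp - old_ftp) / old_ftp
    return statistically_significant and abs(pct_change) >= min_relevant_pct

# A real but tiny gain (1%): significant, yet below the practical bar.
print(worth_acting_on(300, 303, statistically_significant=True))

# A larger swing (4%) that clears both bars.
print(worth_acting_on(300, 312, statistically_significant=True))
```

Coaches might tune the threshold per athlete or per event; the point is that the two criteria are checked separately before the cycle is repeated.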
We tackle the ideas of unique responders, the role of performance variability in differentiating training response between individuals, pitfalls in interpreting typical individual response data, and the experimental setup required to actually tease these things apart. Then we walk through a couple of easy principles for applying the takeaways in your own training.