Some great questions here. Probably worthy of another article or two.
Regarding training sensitivity or response, it’s pretty challenging to develop a program that is dynamic enough to accommodate everyone while still being usable. There’s some tech that can get us close via the app, which is something I’ve been working on for years now. It’s not AI, as other apps claim (they’re not using AI either), but rather a series of variables we monitor that change the program as needed. The rules are built on current evidence and our experience, and they would need some updating over time, perhaps based on large data sets that do not exist yet. For another discussion.
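To make “rules, not AI” a bit more concrete, here is a minimal sketch of the kind of logic I mean. The variable names, thresholds, and adjustments are made up purely for illustration and are not the actual rules the app uses:

```python
# Illustrative only: hypothetical monitored variables and thresholds,
# not the actual rule set in the app.

from dataclasses import dataclass

@dataclass
class SessionFeedback:
    target_rpe: float     # RPE the program prescribed
    reported_rpe: float   # RPE the lifter actually reported
    reps_completed: int   # reps performed on the top set
    reps_prescribed: int  # reps the program called for

def next_load_adjustment(fb: SessionFeedback) -> float:
    """Return a multiplier applied to the next session's load.

    Simple rule set: if the work was much harder than intended,
    back the load off; if it was much easier, nudge it up;
    otherwise leave it alone.
    """
    rpe_gap = fb.reported_rpe - fb.target_rpe
    missed_reps = fb.reps_completed < fb.reps_prescribed

    if missed_reps or rpe_gap >= 2.0:
        return 0.95    # reduce load ~5%
    if rpe_gap >= 1.0:
        return 0.975   # small reduction
    if rpe_gap <= -1.5:
        return 1.025   # small increase
    return 1.0         # keep load as programmed

# Example: prescribed RPE 7 for 5 reps, lifter reports RPE 9 and gets all 5 reps
print(next_load_adjustment(SessionFeedback(7.0, 9.0, 5, 5)))  # -> 0.95
```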
The current programming is our best guess for the intended demographic. As you mention, the autoregulation helps manage the training load to what we (the designers) are shooting for. Whether that training load and formulation works well for someone is mostly an educated guess, again based on evidence, collected data, and experience. It should work well for many people, but it definitely won’t work for **all** people. I like the training load of this program as-is and wouldn’t prospectively change it.
Regarding plateaus, it’s mostly an arbitrary line in the sand that a coach draws for “how long is too long” without a demonstrable improvement. To my mind, I would expect to see an improvement in strength performance (by some metric) within ~2-3 weeks max for a relatively untrained individual. This also assumes accurate testing to begin with, e.g. not starting artificially light and simply adding weight. The more highly trained someone is, and/or the more limited their recovery resources (sleep, nutrition, environmental inputs, etc.), the longer I would stretch that time frame out, maybe up to ~5-6 weeks. When it comes to adjustments after a plateau, I think it’s better to think about total training load, which is composed of volume, average intensity, exercise selection, proximity to failure, and so on. We discuss this in some detail in the 80-page ebook (mostly on strength programming) accompanying the Low Fatigue Template.
We also have to discuss the Minimal “Clinically” Important Difference (MCID) in strength, i.e. what amount of change represents an actual improvement in strength. For example, a 0.5% change is probably noise created by biological and analytical variation. Conversely, a 10% improvement in strength is highly likely to be real, unless someone was sandbagging previously. I think somewhere around 5% is reasonable for the MCID.
If someone isn’t achieving > MCID improvements in strength in the above time frames, I will typically consider a programming change pending feedback on the environment, subjective experience, nutrition, sleep, and so on.
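Putting the time frames and the MCID together, the decision I’m describing looks roughly like this. The ~2-3 week and ~5-6 week windows and the ~5% MCID come straight from the above; the function names are mine, and “recovery compromised” collapses sleep, nutrition, environment, etc. into a single flag purely for illustration:

```python
# Rough sketch of the decision rule described above; not a formal protocol.

def plateau_window_weeks(trained: bool, recovery_compromised: bool) -> int:
    """Longest stretch without measurable progress before I'd call it a plateau."""
    if trained or recovery_compromised:
        return 6   # stretch the window out for trained / under-recovered lifters
    return 3       # relatively untrained lifter with resources in order

def strength_change_pct(baseline: float, current: float) -> float:
    return (current - baseline) / baseline * 100.0

def consider_program_change(baseline: float, current: float,
                            weeks_elapsed: int,
                            trained: bool, recovery_compromised: bool,
                            mcid_pct: float = 5.0) -> bool:
    """True if the window has passed without a > MCID improvement in strength."""
    window = plateau_window_weeks(trained, recovery_compromised)
    improved = strength_change_pct(baseline, current) > mcid_pct
    return weeks_elapsed >= window and not improved

# Example: untrained lifter, e1RM 100 kg -> 102 kg after 4 weeks (+2%, under MCID)
print(consider_program_change(100.0, 102.0, 4,
                              trained=False, recovery_compromised=False))  # -> True
```

Even when that check comes back “yes,” I’d still review the environment, subjective experience, nutrition, and sleep before actually changing the programming.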