Every time a debate pops up in the running or fitness world, someone inevitably drops the magic phrase: “Well, I’m evidence-based.” Cue the PubMed IDs, the “systematic reviews,” and the implication that if you don’t cite a stack of studies, your argument doesn’t matter.
Here’s the uncomfortable truth: research isn’t flawless. It’s messy. It’s often contradictory. It’s frequently built around narrow demographics (young, healthy men, anyone?) that don’t reflect the athletes most of us actually coach. And more often than not, it gets cherry-picked and paraded around to prove a point rather than to genuinely inform athletes.
I’ve watched professionals wave citations around like golden tickets:
- A meta-analysis that “proves” form doesn’t matter, while ignoring the nuance that injury risk is multifactorial.
- A treadmill study that somehow gets extrapolated to trail ultras.
- A single study on men, applied to menopausal women as if nothing changes.
That’s not science. That’s control.
And here’s the kicker: even if I had dropped ten citations into the comments of that thread, it wouldn’t have changed a thing. Because it was never about the evidence. It was about posture. About who gets to play expert.
What actually matters — and what doesn’t get enough credit in these conversations — is context. Lived experience. Athlete feedback. Observing how form breaks down at mile 12 of a long run, not just on a treadmill for two minutes. Pairing research with real-world application, instead of flattening it into absolutes that sound tidy on Instagram.
I’m not anti-research. I read it, I value it, and I use it. But I refuse to weaponize it to shut athletes down or to win arguments with colleagues. Because if the “evidence” doesn’t translate to the humans I coach, it’s just words on a page.
Receipts: When “Evidence” Doesn’t Say What They Claim
Take the studies that keep getting thrown around in the “form doesn’t matter” debate:
- PMID: 32930647 (2021). Coaches were asked to visually rank runners on running economy. The result? They couldn’t do it. That doesn’t mean form doesn’t matter — it means you can’t eyeball economy from the sidelines. Yet I’ve seen this study twisted into “proof” that biomechanics are irrelevant.
- PMID: 35254562 (2022). This review concluded that no single biomechanical or musculoskeletal measure alone can predict injuries in non-elite runners. That makes sense — injury risk is multifactorial. But here’s the problem: people cite it as if it “proves” form never matters. In reality, it just proves no one variable explains every injury. That’s nuance, not absolutes.
- And the “systematic review” flex. Meta-analyses sound bulletproof, but they’re only as strong as the studies they include. Small sample sizes, short follow-up periods, and homogeneous subjects (again, college-aged male runners) all limit what these reviews can actually tell us. When you broaden the pool, you water down the signal — but that doesn’t mean the signal isn’t there.
None of these citations say what they’re being weaponized to say. They say running is complex. That there isn’t a single predictor. That context matters. Which is exactly what I’ve been saying all along.
Evidence-based coaching shouldn’t be about cherry-picking studies to control the narrative. It should be about holding nuance, pairing science with lived experience, and helping athletes find what actually works in their bodies.
That’s where the real work is. And that’s where I’ll stay.