On a recent hike, several participants noticed a wide gap between the elevation gains reported by their mobile apps and the figure published in a review of the hike. I ultimately determined that the difference came down to sampling intervals: the apps appear to sample and/or smooth to an effective 100′ interval, while the review was based on a 50′ interval. The difference was about 1,200′ over 18 miles, so not insignificant. And if I adjust the interval in CalTopo to include all of the track data (roughly a 12′ interval), the difference grows to more like 3,500′ over those same 18 miles.
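To convince myself it really was the interval, I mocked it up. Here is a minimal sketch (Python with NumPy, using a made-up 18-mile profile rather than the real track) of the usual approach: resample the track to a fixed distance interval, then sum only the positive elevation changes. The absolute numbers it prints are arbitrary; the point is that the same profile produces a smaller total as the interval gets coarser.

```python
import numpy as np

def resample_by_distance(dist_ft, elev_ft, interval_ft):
    """Keep one elevation sample every interval_ft feet along the track."""
    grid = np.arange(0, dist_ft[-1], interval_ft)
    return np.interp(grid, dist_ft, elev_ft)

def total_gain(elev_ft):
    """Sum only the positive elevation changes between consecutive samples."""
    deltas = np.diff(elev_ft)
    return deltas[deltas > 0].sum()

# Synthetic 18-mile profile with relief at several scales; the finest term
# stands in for the small dips, bumps, and sensor noise that coarse sampling misses.
dist = np.arange(0, 18 * 5280, 12.0)       # one point every ~12 ft
elev = (0.02 * dist                        # long, gentle climb
        + 300 * np.sin(dist / 5000)        # big rollers
        + 20 * np.sin(dist / 150)          # moderate dips and bumps
        + 3 * np.sin(dist / 25))           # fine-scale roughness

for interval in (12, 50, 100, 200):
    gain = total_gain(resample_by_distance(dist, elev, interval))
    print(f"{interval:>4} ft interval -> {gain:>6,.0f} ft of gain")
```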
It is a coastline-paradox kind of problem. So what is the “best” interval? Do most apps treat 100′ as a de facto standard? We rely on elevation gain to assess the challenge and exertion of a hike, but that becomes difficult when different apps use different algorithms or interval standards. Is there some approach I am missing that would make the “optimum” interval more obvious, so that elevation expectations can be expressed consistently and fairly?
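One back-of-the-envelope way I've been thinking about why the total keeps growing as the interval shrinks, under the simplifying assumption of independent vertical error of standard deviation σ at each sample (real GPS/barometer error is only roughly like this): the difference between two consecutive error values is normal with standard deviation σ√2, and its positive part averages about σ/√π ≈ 0.56σ. Over a track of length L sampled every d feet, the spurious gain from noise alone is therefore roughly (L/d)·σ/√π, which grows without bound as d shrinks. That seems to be the coastline-paradox aspect in a nutshell, and it is why I am unsure any single interval can be called “correct” rather than merely conventional.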


