Some example graphs may help (shoot 'em off to me in e-mail and I'll include them in my reply if you like).

Let's say that at the 24-hour scale, 20 samples are included in every "pixel" width, and that you have 20 samples in a row that were lost. As you increase the time scale from 1 hour to 24 hours, the number of samples in any pixel increases - from roughly 1 at 1 hour to 20 at 24 hours. Throughout that scaling, 100% of the packets in a pixel width are lost, so packet loss shows at full height.

Now let's pop the scale up one more step - to 48 hours. At 48 hours, there are 40 samples in any pixel. Because only 20 samples were lost, the other 20 were successful - so only 50% of the packets sent out in that time period were lost. We don't drop below 100% loss until that point, because until then we'd always lost at least as many samples as were averaged into a single pixel width. (The little sketch at the end of this message works through the same arithmetic.)

I may be misunderstanding your question - and some pictures might help me a lot. Feel free to correct my misunderstanding in that case.
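To make the arithmetic concrete, here's a minimal sketch (not the actual graphing code - just an assumption that each pixel column shows lost samples divided by total samples in its bucket, and that samples-per-pixel grows with the time window):

```python
# Sketch of how a run of lost samples averages out as more samples
# land in each pixel column. Displayed loss for a pixel is assumed
# to be (lost samples in bucket) / (samples in bucket).

def displayed_loss(samples, samples_per_pixel):
    """Return the loss fraction shown for each pixel-wide bucket."""
    losses = []
    for start in range(0, len(samples), samples_per_pixel):
        bucket = samples[start:start + samples_per_pixel]
        lost = bucket.count(False)          # False = sample lost
        losses.append(lost / len(bucket))
    return losses

# 20 consecutive lost samples surrounded by successful ones.
samples = [True] * 40 + [False] * 20 + [True] * 40

for spp in (1, 10, 20, 40):                 # samples averaged per pixel
    peak = max(displayed_loss(samples, spp))
    print(f"{spp:2d} samples/pixel -> peak loss shown: {peak:.0%}")

# Output:
#  1 samples/pixel -> peak loss shown: 100%
# 10 samples/pixel -> peak loss shown: 100%
# 20 samples/pixel -> peak loss shown: 100%
# 40 samples/pixel -> peak loss shown: 50%
```

The peak stays at 100% as long as the bucket is no wider than the 20-sample outage, then drops to 50% once 40 samples are averaged into each pixel - which is the behaviour described above.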