Time graph packet loss height
Why do the red "packet loss" lines have different vertical heights on the graph?
Each "pixel" in the time-graph summarizes anywhere from one sample to many samples. When you're looking at 5 minutes of data in a time-graph, there are fewer samples than pixels, so each sample is drawn multiple pixels wide (because the number of samples being displayed is limited). In this case, a lost packet will show as 100% packet loss on the graph, and the red period will be multiple pixels wide.
Now, let's switch to the other end of the spectrum. Let's say you want to show 48 hours on the time-graph. With a short sample interval, each pixel now has to summarize hundreds of samples, so a single lost packet shows up as only a thin sliver of red rather than a full-height bar.
Sometimes, each pixel will represent 8 samples (or maybe 4, or some similarly small number). This will happen if you're looking at one hour of data when the sample interval is 1 second and the graph is 450 pixels wide. In this case, if one packet was lost, you'll see a jump in red height of a fixed amount. Another lost packet will make it jump up another "chunk". As you look across a time-graph like this, you'll see a number of red bars of equal height (1 lost sample out of 8, say) and a number of somewhat taller bars (2 lost samples out of 8). These "plateaus" appear because packets are only lost in whole-number increments. The more data you view at once, the smoother these packet loss lines get - and the more trending you'll see in the packet loss data.
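The arithmetic behind these three cases can be sketched in a few lines. This is a minimal illustration, not the graphing tool's actual code; the function names and the 450-pixel width assumption come from the example above.

```python
def samples_per_pixel(span_seconds, sample_interval_seconds, graph_width_pixels):
    """How many samples each horizontal pixel must summarize.

    Floored at 1: when there are fewer samples than pixels, one
    sample simply spans several pixels.
    """
    total_samples = span_seconds / sample_interval_seconds
    return max(1.0, total_samples / graph_width_pixels)

def loss_height_percent(lost_in_pixel, per_pixel):
    """Red-bar height: lost packets over the samples in that pixel."""
    return 100.0 * lost_in_pixel / per_pixel

# 5 minutes at a 1-second interval on a 450-pixel graph: fewer samples
# than pixels, so one lost packet renders as a 100%-tall red bar.
print(loss_height_percent(1, samples_per_pixel(5 * 60, 1, 450)))        # 100.0

# 1 hour: 3600 samples / 450 pixels = 8 samples per pixel, so each
# lost packet raises the bar in fixed 12.5% "chunks" (the plateaus).
per_pixel = samples_per_pixel(60 * 60, 1, 450)
print(loss_height_percent(1, per_pixel))                                # 12.5
print(loss_height_percent(2, per_pixel))                                # 25.0

# 48 hours: 172800 samples / 450 pixels = 384 samples per pixel; one
# lost packet is a thin ~0.26% sliver, so trends show instead of spikes.
print(round(loss_height_percent(1, samples_per_pixel(48 * 3600, 1, 450)), 2))  # 0.26
```

Note how the same single lost packet goes from 100% tall to barely visible purely as a function of how many samples each pixel has to average.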
These same concepts with packet loss graph heights also apply to
The graph above shows a perfect example of how this averaging is helpful - you get to see packet loss trends over time.