Eliminate Timeline Pixel Averaging

Posted by: Anonymous

Eliminate Timeline Pixel Averaging - 03/24/02 10:21 AM

I am reposting my thoughts from the support forum...

I suggest you change the default timeline graph to represent MAXIMUM values, with an OPTION to change it to an average.

The "pixel averaging" scheme you use throws out valuable data. It is also probably harder to code. ;-) When beating up on an ISP or looking for trouble spots, I want to be able to see worst-case performance over a given time period. For example, when a single ping results in 100% packet loss, I want to see that tall red line, even in a 24-hour view, so I can see what happened. As it is now, a long-term view averages the samples around a "pixel" and reduces that critical 100% packet loss condition to background noise.

Of course, being able to toggle it to represent an average would be neat!

Fantastic piece of software. Just registered today...

cgardner
Posted by: Pete Ness

Re: Eliminate Timeline Pixel Averaging - 04/04/02 02:09 AM

We've gotten several requests for this over the years - and if there's enough demand, we may add an option to graph min/max/avg on the time graph.

The big problem with doing this is that "maximum" data isn't a very good way of looking for a problem. If you've collected 1000 samples, 999 of which took 50 ms and 1 of which took 500 ms, what is the right number to show? If you're complaining to your ISP and they see a graph with 500 ms on it, they may think there's something wrong there - and without any qualification, they may actually investigate something that isn't really a problem (in my book, having 0.1% of the packets take 500 ms and 99.9% take 50 ms is a pretty good connection). In reality, you could count that 500 ms response as a lost packet - and 0.1% packet loss is pretty good too. PingPlotter is effective because it presents data in a way that reflects reality - and we're a bit hesitant to put in an option that could make the reported data look excessively bad (which this option might do).
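To make the tradeoff concrete, here's a small sketch (illustrative Python only - this is not PingPlotter's actual code) of how averaging vs. taking the maximum per pixel changes what the graph shows for the 1000-sample example above:

```python
# Illustrative sketch: how per-pixel aggregation changes what a timeline shows.
# 1000 latency samples: 999 at 50 ms, one spike at 500 ms.
samples = [50.0] * 999 + [500.0]

def aggregate(values, num_pixels, reducer):
    """Bucket the samples into equal-sized pixel buckets, then reduce each."""
    bucket_size = len(values) // num_pixels
    return [reducer(values[i * bucket_size:(i + 1) * bucket_size])
            for i in range(num_pixels)]

def mean(values):
    return sum(values) / len(values)

# Suppose the whole window is squeezed into 10 pixels (100 samples each).
avg_pixels = aggregate(samples, 10, mean)
max_pixels = aggregate(samples, 10, max)

print(avg_pixels[-1])  # 54.5 - the spike's bucket averages down, barely visible
print(max_pixels[-1])  # 500.0 - the spike dominates that pixel of the graph
```

With averaging, the 500 ms outlier raises its bucket to only 54.5 ms; with a maximum, that single sample makes the whole pixel read 500 ms - which is exactly the "looks excessively bad" effect described above.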

Now - as for packet loss percentages. The same thing applies here as well. By definition, a "percentage" needs more than one sample to be meaningful. If you always show 100% packet loss for any period whenever there's a single lost packet, then that's not a real number. If there are 100 samples in a pixel and only 1 is lost, showing 100% packet loss for that period would be just wrong. If you want the packet loss to be more apparent, just change the packet loss scale from the default of 30% to something like 1%. This will make the packet loss show up much more strongly in the size of the red bar, while still being qualified by the 1% scale shown on the side.
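A quick sketch of the same point for packet loss (again illustrative Python, not PingPlotter's internals): with 100 samples in a pixel and one lost packet, the honest per-pixel figure is 1%, and rescaling the loss axis makes that 1% visually prominent without misstating it:

```python
# Illustrative: packet loss within one pixel bucket. True = lost, False = received.
results = [False] * 99 + [True]  # 100 samples in the pixel, 1 lost

loss_percent = 100.0 * sum(results) / len(results)
print(loss_percent)  # 1.0 - the honest per-pixel figure, not 100%

# Changing the graph's loss scale (e.g. 0-1% instead of the default 0-30%)
# changes only how tall the red bar is drawn, not the underlying number.
bar_fraction_default = min(loss_percent / 30.0, 1.0)  # against a 30% scale
bar_fraction_tight = min(loss_percent / 1.0, 1.0)     # against a 1% scale
print(bar_fraction_tight)  # 1.0 - the bar now fills the scale
```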

When beating up on your ISP, be *very* careful about showing worst case performance. All internet connections are built to handle occasional problems without you even noticing. The bad stuff really happens when you start to see a lot of these "worst case" situations - and when that happens, PingPlotter will properly show elevated latencies and higher packet loss.

I may be missing the point of what you're trying to do, but it sounds like you're trying to exaggerate the significance and size of your problems. This is not the best way to get your ISP to fix a problem.