Ahhh - a *great* question!

First off, saving always saves exactly the data in memory - so if you've only got 5760 (ish) samples in memory, then that's all that will be saved to file.

What I do is set my "maximum samples" high enough to cover several days (I regularly trace at 2.5 second intervals and have "maximum samples" set to 100,000). I also set it up to save every 30 minutes, and set my file name to c:\ppdatadir\$host\$host $date. This groups my data by host into its own directory and saves one file per day ($date being the same as $year-$month-$day) - there's a quick sketch of how that pattern expands at the end of this post.

For 100K samples, at 2 bytes per hop per sample, a 15 hop route takes about 3 megs of RAM - pretty reasonable even if you've got multiple instances running (that math is sketched at the end too).

Now, the catch with this approach is that you do have to keep enough data in memory to cover the longest period of time you ever graph. Since I have my maximum graph scale set to the default of 48 hours, this setup works for every situation I need (although that may not be the case for you).

The next version of Ping Plotter addresses this by allowing you to load multiple files that are targeted at the same destination - that way you could load up 5 single-day files and have 5 continuous days of data in memory for analysis. A first beta of this release should be available "soon", but it doesn't have a specific release date yet.

Hopefully this gives you some ideas about how you can get the results you need. Feel free to comment or re-direct if I missed your point.
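Here's a rough sketch (plain Python, nothing Ping Plotter-specific) of how the $host / $date file name pattern above groups data. The host name, date, and helper function are made up purely for illustration:

```python
from datetime import date

def save_path(host: str, day: date, base: str = r"c:\ppdatadir") -> str:
    """Expand the '$host\\$host $date' pattern described above,
    with $date being $year-$month-$day."""
    return rf"{base}\{host}\{host} {day:%Y-%m-%d}"

# Hypothetical example: each host gets its own directory, one file per day.
print(save_path("www.example.com", date(2003, 6, 15)))
# c:\ppdatadir\www.example.com\www.example.com 2003-06-15
```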
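And here's the memory math, if you want to sanity-check the numbers for your own interval and graph window. This assumes the 2-bytes-per-hop-per-sample figure from above; the function names are just for the example:

```python
def samples_needed(trace_interval_s: float, window_hours: float) -> int:
    """Minimum 'maximum samples' setting that keeps a full graph window in memory."""
    return round(window_hours * 3600 / trace_interval_s)

def memory_estimate_mb(max_samples: int, hop_count: int, bytes_per_hop: int = 2) -> float:
    """Rough RAM used by the sample buffer for one target."""
    return max_samples * hop_count * bytes_per_hop / 1_000_000

# The setup described above: 2.5 second interval, 48 hour graph scale,
# 100,000 samples kept, 15 hop route.
print(samples_needed(2.5, 48))          # 69120 samples cover a 48-hour graph
print(memory_estimate_mb(100_000, 15))  # ~3.0 MB per target
```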