Thank you for the information. The sensors in question are indeed running in continuous mode, and the logger appears to be able to fetch the data quickly enough. I've just looked at the new data we received yesterday, and another strange issue has shown up.
First, let me explain our setup. We use three schedules: Schedule A is the unload schedule, which runs at 00:00 UTC; Schedule B samples once per second; and Schedule C samples once per minute. The data seems to be sampled fairly consistently, with no actual missing data points (except at the end) and with delta times within ± a few hundred milliseconds for the most part. Here's an example of our data:
Second sampling:
2016/03/10 22:49:59.001,n,106,0.9,-0.6,85.6,1013.2
2016/03/10 22:50:00.002,n,107,0.9,-0.6,85.7,1013.2
2016/03/10 22:50:01.226,n,113,0.8,-0.6,85.7,1013.2
2016/03/10 22:50:02.000,n,109,0.9,-0.6,85.7,1013.2
2016/03/10 22:50:03.000,n,109,0.9,-0.6,85.7,1013.2
2016/03/10 22:50:04.001,n,107,0.9,-0.6,85.8,1013.2
2016/03/10 22:50:05.002,n,109,1,-0.6,85.8,1013.2
Minute sampling:
2016/03/10 22:48:00.589,n,,,,,,0,0,0,0,0,0,0,0,3.1,6,6.5,3.528
2016/03/10 22:49:00.589,n,,,,,,0,0,0,0,0,0,0,0,3.1,6,6.5,3.53
2016/03/10 22:50:00.590,n,,,,,,0,0,0,0,0,0,0,0,3.1,6,6.5,3.529
2016/03/10 22:51:00.594,n,,,,,,0,0,0,0,0,0,0,0,3,6,6.4,3.528
2016/03/10 22:52:00.591,n,,,,,,0,0,0,0,0,0,0,0,3,6,6.5,3.528
2016/03/10 22:53:00.596,n,,,,,,0,0,0,0,0,0,0,0,2.8,6,6.5,3.529
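For what it's worth, here is roughly how I checked the delta times between consecutive samples (a minimal Python sketch; it assumes the timestamp is always the first comma-separated field, formatted as in the rows above):

```python
from datetime import datetime

def find_gaps(lines, expected_s=1.0, tolerance_s=0.5):
    """Yield (prev, curr, delta) for consecutive samples whose spacing
    deviates from the expected interval by more than the tolerance."""
    prev = None
    for line in lines:
        # Timestamp is the first comma-separated field of each row.
        stamp = line.split(",", 1)[0]
        t = datetime.strptime(stamp, "%Y/%m/%d %H:%M:%S.%f")
        if prev is not None:
            delta = (t - prev).total_seconds()
            if abs(delta - expected_s) > tolerance_s:
                yield prev, t, delta
        prev = t

# Example rows from the second-sampling schedule above. The 22:50:01.226
# sample lags by 1.224 s, but that is still inside a +/-0.5 s tolerance,
# so nothing is flagged here.
samples = [
    "2016/03/10 22:49:59.001,n,106,0.9,-0.6,85.6,1013.2",
    "2016/03/10 22:50:00.002,n,107,0.9,-0.6,85.7,1013.2",
    "2016/03/10 22:50:01.226,n,113,0.8,-0.6,85.7,1013.2",
    "2016/03/10 22:50:02.000,n,109,0.9,-0.6,85.7,1013.2",
]
gaps = list(find_gaps(samples))
```

Running this over the full unloaded file is how I concluded that only the tail end is missing.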
Now for the confusing part. The .csv file that was unloaded at UTC midnight has 00:00:02 as the timestamp in its filename, but according to the server it was created at 00:14:00. These are the last samples in it:
Last second sample:
2016/03/11 00:00:09.007,n,240,1.1,-0.5,80,1013.4
Last minute sample:
2016/03/11 00:13:00.591,n,,,,,,0,0,0,0,0,0,0,0,2.1,6,6.6,3.527
The unloaded data has been deleted, and the DT-80 now shows the oldest records to be at 00:13:10 (second schedule) and 00:14:00 (minute schedule). We seem to have lost 13 minutes of second samples.
Do you have any idea what could be causing this issue? We have NTP set to update every hour, so I don't see how that could be the problem.