Question:

I have been using an Arduino to record some data. In my sketch I also use the millis() function so I can keep track of the time at which each value I measure is taken. However, I noticed that the timing isn't correct: when I tried plotting the data, the peaks in my data were more frequent than the elapsed time should allow. For example, 30 seconds in real life comes out as only 10 seconds (made-up numbers).

Am I correct in saying that the Arduino delay() function affects timekeeping with millis()? In other words, if I have a delay of 50ms, does the millis() count stop for that duration as well, then continue, and so on for the duration of the connection? I want to know whether that is the reason for this timing mismatch and, if so, how I can fix it so that I can record the time at which each sample occurs. My sketch follows the usual template (the setup routine runs once when you press reset; the loop routine runs over and over again forever); in loop() I read the sensor with eHealth.getECG(), print the millis() timestamp, and delay between samples.

Answer 1:

millis() is interrupt driven, so delay() won't impact it, at least not on an ATmega-based board.

That isn't to say that millis() is totally accurate either. Each tick of the timer is not exactly 1ms but 1.024ms, and this error gradually accumulates until a correction is made. This can be seen in the implementation of the TIMER0_OVF (timer 0 overflow) interrupt handler. Another source of inaccuracy is the oscillator/crystal itself, which is not exactly 16MHz. It is pretty close though, and as long as the temperature doesn't change too much, it is relatively stable. The above means that you might be about 1ms out when using millis().

Another potential issue would be what getECG() is doing - it might be very slow. Internally it boils down to an analogRead() plus a scaling step, analog0 = (float)analog0 * 5 / 1023.0, and analogRead() is slow, but not so slow as to impact a loop like this.

Another problem I have seen people have is when they change the clock speed but don't correctly change boards.txt. That means the constants used in the millis() implementation are wrong, and so the reported times are wrong. This doesn't sound like your problem, though.

If you actually want to read values every 50ms, a much better way of implementing it is to keep a static long lastUpdate timestamp and take a sample only when millis() - lastUpdate reaches 50, instead of calling delay().

Answer 2:

If you are actually seeing 30s showing up as 10s, then there is something else at work; we'd really need to see the timestamps you are getting. If interrupts are turned off for any significant fraction of the eHealth.getECG() call's duration, millis()'s count could fall behind. Otherwise, millis() should return much more accurate times than the 3x errors you describe.

You said your sampled signal appears higher in frequency than you expected, which could happen if your sample rate is lower than you intended. Are you assuming a 20Hz sample rate? Your loop could be taking a fair bit longer than 50ms, which you would see in the printed times, but those should still track clock time. If you didn't account for that and assumed 50ms per sample, you would see an apparent speed-up of the data.

If this is not the issue, the next step would be to toggle an output pin while you're in loop(), and measure the frequency of the resulting square wave with a frequency meter (some inexpensive DVMs can do this) or a 'scope.
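The millis()-based sampling approach suggested above can be sketched as follows. This is an illustration, not code from the thread: millis() and analogRead() are stubbed so the logic can compile and run off-device, and a plain analogRead() stands in for eHealth.getECG() to keep the example self-contained. On a real board you would delete the stubs and use the core functions.

```cpp
#include <cstdint>
#include <cstdio>

// --- Host-side stubs so the sketch logic runs off-device. ---
// --- On a real Arduino, delete these and use the core functions. ---
static uint32_t fakeClock = 0;
uint32_t millis() { return fakeClock; }
int analogRead(int /*pin*/) { return 512; } // pretend mid-scale ADC reading

const uint32_t SAMPLE_INTERVAL_MS = 50; // 20 Hz target rate

// Returns true when at least SAMPLE_INTERVAL_MS has elapsed since the
// last accepted sample. Using unsigned subtraction keeps the comparison
// correct across the millis() rollover at ~49.7 days.
bool sampleDue(uint32_t now, uint32_t &lastUpdate) {
    if (now - lastUpdate >= SAMPLE_INTERVAL_MS) {
        lastUpdate = now;
        return true;
    }
    return false;
}

static uint32_t lastSample = 0;

// The shape loop() would take on the board: no delay(50) anywhere, so a
// slow sensor call stretches the loop but not the recorded timestamps.
void loop() {
    if (sampleDue(millis(), lastSample)) {
        int raw = analogRead(0);
        float volts = (float)raw * 5 / 1023.0; // same scaling as in the thread
        printf("%lu ms: %.2f V\n", (unsigned long)millis(), volts);
    }
}
```

The key point of this pattern is that the timestamps come from millis() rather than from counting assumed 50ms delays, so if getECG() is slow you lose sample rate but the recorded times stay honest.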
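The 1.024ms tick and its periodic correction can be checked with a little arithmetic. Below is a sketch of the bookkeeping the TIMER0_OVF handler performs, with the constants derived from the standard 16MHz, prescale-64 timer0 setup; the function name is mine, and the constant names follow my recollection of the AVR core's wiring.c, so treat the details as an approximation of the real handler.

```cpp
#include <cstdint>

// Timer0 on a 16 MHz ATmega uses a /64 prescaler and overflows every
// 256 counts, so one overflow takes 64 * 256 / 16 = 1024 microseconds.
const uint32_t US_PER_OVERFLOW = 64UL * 256UL / 16UL;           // 1024
const uint32_t MILLIS_INC      = US_PER_OVERFLOW / 1000;        // 1 ms per tick
const uint8_t  FRACT_INC       = (US_PER_OVERFLOW % 1000) >> 3; // leftover 24 us, stored /8
const uint8_t  FRACT_MAX       = 1000 >> 3;                     // one full ms, stored /8

// What millis() would report after a given number of timer0 overflows:
// each overflow adds 1 ms plus a fractional remainder, and once the
// remainder reaches a full millisecond an extra 1 ms is added.
uint32_t simulatedMillis(uint32_t overflows) {
    uint32_t m = 0;
    uint8_t frac = 0;
    for (uint32_t i = 0; i < overflows; ++i) {
        m += MILLIS_INC;
        frac += FRACT_INC;
        if (frac >= FRACT_MAX) { // the periodic correction
            frac -= FRACT_MAX;
            m += 1;
        }
    }
    return m;
}
```

After 1000 overflows (1024.0ms of real time) this reports 1024, so the count stays correct in the long run; between corrections it lags real time by up to about 1ms, which is the "about 1ms out" figure above.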
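The "apparent speed-up" explanation is easy to sanity-check with arithmetic: if each pass of loop() really takes longer than 50ms but the plot's time axis assumes 50ms per sample, recorded time is compressed. The helper below and the 150ms loop time it is exercised with are my own illustration, chosen only to reproduce the question's made-up 30s-shown-as-10s example.

```cpp
// If each pass of loop() really takes actualPeriodMs, but the plot's
// time axis assumes assumedPeriodMs per sample, then realMs of wall
// time is displayed as this many milliseconds:
double apparentMs(double realMs, double actualPeriodMs, double assumedPeriodMs) {
    double samples = realMs / actualPeriodMs; // samples actually collected
    return samples * assumedPeriodMs;         // time axis shown on the plot
}
```

A loop that really takes 150ms per pass, plotted at an assumed 50ms per sample, shows 30 seconds of data as 10 seconds, i.e. the 3x compression described in the question. Printing millis() with each sample, rather than assuming the rate, removes this error entirely.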