This is the mail archive of the
ecos-discuss@sourceware.cygnus.com
mailing list for the eCos project.
RE: bogus clock interrupt handling numbers?
- To: Clint Bauer <CLBAUER at INTELECTINC dot COM>
- Subject: RE: [ECOS] bogus clock interrupt handling numbers?
- From: Gary Thomas <gthomas at cygnus dot co dot uk>
- Date: Wed, 03 Nov 1999 10:33:45 -0700 (MST)
- Cc: ecos-discuss at sourceware dot cygnus dot com
On 01-Nov-99 Clint Bauer wrote:
> Sorry to be dense. The numbers I am seeing for tv[] after
>
> // overhead calculations
> for (i = 0; i < nsamples; i++) {
> HAL_CLOCK_READ(&tv[i]);
> }
>
> are for instance,
> tv[0] = 4
> tv[1] = 4
> tv[2] = 4
> ...
> tv[31] = 4
>
> This leads to the result of zero ticks of overhead, and seems plausible
> given the clock interval period (10 ms for me in this case), and the fact
> the other evaluation boards also get this calculation.
>
No, you've misunderstood these calculations. I'm sorry that the nomenclature
is confusing, but I'll try again.
eCos has a system clock which runs [typically] at 100Hz. This "clock"
is based on some hardware device, normally a timer of some sort. The timer
is typically driven by hardware "ticks" - signals that tell it to count
up or down - at some rate. This rate is normally *much* higher than 100Hz.
Thus, we program the timer in such a way that the system gets an interrupt
after some number of these hardware "ticks" have occurred. If the hardware
timer were running at 1MHz, then we would use a value of 10000 ticks per interrupt.
The 'HAL_CLOCK_READ()' function is a way of reading this hardware timer. It
is designed to return the number of hardware "ticks" since the last interrupt.
Thus, in this example, it could have a value from 0..9999, representing (again
in this idealized example) time from 0us to 9999us since the last interrupt.
'tm_basic' uses this value to calculate a number of critical timings. Using
this value, we can measure the overhead involved in certain operations, such
as a "C" function call, a loop, etc. On some systems, though, the CPU may be
capable of executing many of these operations faster than the hardware timer
"ticks". The JMR3904 can execute a large number of instructions within the
span of a single tick, which is why you get an overhead value of 0. Some care
is taken to reduce the error inherent in these calculations, such as performing
a measured calculation a large number of times and taking the average. Also,
most values calculated/used within 'tm_basic' are actually in nanoseconds
(1e-9), with the results converted to microseconds (1e-6) for display.
The problem with the nomenclature is that sometimes "ticks" (as I have just
described them) are called "clocks", and timer interrupts are called "ticks",
and so on. It can be quite confusing.
> For the overhead calculation -
>
> for (i = 0; i < nsamples; i++) {
> tick0 = cyg_current_time();
> while (true) {
> tick1 = cyg_current_time();
> if (tick0 != tick1) break;
> }
> HAL_CLOCK_READ(&tv[i]);
> }
>
> The observed values are
> tv[0] = 19
> tv[1] = 20
> tv[2] = 21
> ...
> tv[31] = 50
>
> Each value is one greater than previous (you are waiting until the kernel is
> informed of a clock increment, before reading the value). Since there is no
> overhead in reading the values, (from first test), the values seem valid to
> me.
>
I just tried this on the JMR3904 in our test farm. Here are the values I saw:
Reading the hardware clock takes 0 'ticks' overhead
... this value will be factored out of all other measurements
Sample 0: 43
Sample 1: 49
Sample 2: 39
Sample 3: 39
Sample 4: 39
Sample 5: 39
Sample 6: 42
Sample 7: 39
Sample 8: 39
Sample 9: 39
Sample 10: 39
Sample 11: 41
Sample 12: 39
Sample 13: 39
Sample 14: 39
Sample 15: 39
Sample 16: 42
Sample 17: 39
Sample 18: 39
Sample 19: 39
Sample 20: 39
Sample 21: 41
Sample 22: 39
Sample 23: 39
Sample 24: 39
Sample 25: 39
Sample 26: 42
Sample 27: 39
Sample 28: 39
Sample 29: 39
Sample 30: 39
Sample 31: 41
Clock interrupt took 25.98 microseconds (39 raw clock ticks)
As you can see, the results of 'HAL_CLOCK_READ(&tv[i])' are nearly
constant, not ascending. These values represent the number of hardware
"ticks" counted by the timer since the last interrupt. This is a useful
[and fairly accurate] indication of how long the clock interrupt
processing takes.
If you don't see values like this, we need to know. What version of
eCos are you using/testing from? (My values are based on the latest
version which is available from sourceware.cygnus.com).