This is the mail archive of the
ecos-discuss@sources.redhat.com
mailing list for the eCos project.
Re: Time slice is not happening quiet alright
- From: "Mike A" <embeddedeng at hotmail dot com>
- To: gthomas at ecoscentric dot com
- Cc: ecos-discuss at sources dot redhat dot com
- Date: Fri, 30 Aug 2002 22:08:17 +0000
- Subject: Re: [ECOS] Time slice is not happening quiet alright
- Bcc:
Hi Gary,
Sorry for the oversized junk (program output) in my last email.
Thanks for the reply.
So you mean that there is some code within printf() that blocks the scheduler
(something like a schedule_lock), or which prevents the running task from
giving up the CPU.
Wonderful!
Now one more question.
Is there a document that talks about this? If there is let me know.
Thanks,
-Mike.
From: Gary Thomas <gthomas@ecoscentric.com>
To: Mike A <embeddedeng@hotmail.com>
CC: eCos Discussion <ecos-discuss@sources.redhat.com>
Subject: Re: [ECOS] Time slice is not happening quiet alright
Date: 30 Aug 2002 07:34:39 -0600
On Thu, 2002-08-29 at 11:28, Mike A wrote:
> Hi,
>
> I wrote a small test program to study the accuracy of time slicing
> (round-robin scheduling) with eCos. The results show that the time slicing
> or scheduling was happening at irregular time intervals.
>
> I created two threads of the same priority & started them. Ideally each of
> them should run alternately for exactly 5 ticks (5 ticks is the configured
> time-slice). I guess the scheduler was getting called every 5 ticks, but
> sometimes the thread switching was not happening, so the running thread ran
> continuously for 2 slots & sometimes 3, 4, 5, & so on...
>
> Why is this happening? Am I missing something here?
>
First of all, sending 113KB of output just to show that thread switching
wasn't happening like you expected was a bit excessive. A few hundred
lines would have sufficed.
The problem with your test case is that "printf()" perturbs the
whole process. It is quasi-interrupt driven in this case and causes
the scheduling to change.
In a purely compute driven environment, this code would run like this:
thread 0 - runs for 5 ticks
thread 1 - runs for 5 ticks
thread 0 - runs for 5 ticks
thread 1 - runs for 5 ticks
...
However, what happens is this:
thread 0 - runs for 5 ticks, printing all the while
thread 1 - tries to run, but is blocked when it tries to print because
thread 0 hadn't finished its last printf().
thread 0 - runs for another 5 ticks
thread 1 - tries to run, maybe it gets through printf(), maybe not,
...
If you look carefully at your output, you'll see something like this:
0:118
0:1:120
1:121
1:122
1:123
1:124
1190:125
which shows how the output can get confused.
There's no good way to get around this. Remember what timeslice
scheduling means: a thread gets to run for N ticks, or until it gives
up the CPU. Whenever there is some operation (such as printf()) that
potentially gives up the CPU, the pure timeslicing property will be
abandoned.
Try this program, which does not have printf() within the threads,
to see that things really do work.
------------------------------------ test program ------------------------
#include <cyg/kernel/kapi.h>
#include <cyg/infra/diag.h>
#include <stdio.h>              /* printf() */
#define NUM_THREADS 4
#define NUM_SAMPLES 32
cyg_thread thread_s[NUM_THREADS]; /* space for thread objects */
cyg_handle_t thread_h[NUM_THREADS];
char stack[NUM_THREADS][4096]; /* space for four 4K stacks */
cyg_thread_entry_t simple_program;
int thread_data[NUM_THREADS][NUM_SAMPLES];
void cyg_user_start(void)
{
    int indx;
    char name[32];

    printf("Testing thread scheduling\n");
    for (indx = 0; indx < NUM_THREADS; indx++) {
        diag_sprintf(name, "Thread %d", indx);
        cyg_thread_create(4, simple_program, (cyg_addrword_t) indx,
                          name, (void *) stack[indx], 4096,
                          &thread_h[indx], &thread_s[indx]);
        cyg_thread_resume(thread_h[indx]);
    }
}

void simple_program(cyg_addrword_t indx)
{
    int start_time = (int)cyg_current_time();
    int new_time;
    int sample = 0;
    int ctr;

    while (sample < NUM_SAMPLES) {
        thread_data[indx][sample++] = start_time;
        while ((new_time = (int)cyg_current_time()) == start_time) {
            // Do something useful...
        }
        start_time = new_time;
    }
    printf("Thread %2d: ", indx);
    for (sample = 0, ctr = 0; sample < NUM_SAMPLES; sample++) {
        printf("%4d ", thread_data[indx][sample]);
        if (++ctr == 16) {
            ctr = 0;
            printf("\n ");
        }
    }
    printf("\n");
    cyg_thread_exit();
}
-----------------------------------------------------------------------
The output from this program is:
Testing thread scheduling
Thread  0:    0    2    3    4    5   21   22   23   24   25   41   42   43   44   45   61
   62   63   64   65   81   82   83   84   85  101  102  103  104  105  121  122
Thread  1:    6    7    8    9   10   26   27   28   29   30   46   47   48   49   50   66
   67   68   69   70   86   87   88   89   90  106  107  108  109  110  126  127
Thread  2:   11   12   13   14   15   31   32   33   34   35   51   52   53   54   55   71
   72   73   74   75   91   92   93   94   95  111  112  113  114  115  128  129
Thread  3:   16   17   18   19   20   36   37   38   39   40   56   57   58   59   60   76
   77   78   79   80   96   97   98   99  100  116  117  118  119  120  130  131
You'll see that thread 0 runs, then thread 1, then thread 2, then thread 3,
then thread 0 again, etc. Just as it should be, without the perturbations
induced by using printf().
Note that there is an anomaly with the time, which goes from 0->2. This is
because interrupts are off for a little while when the program starts, and
eCos catches up as soon as they are on.
Also note that your original version runs much better if you use
'diag_printf()' instead, since it can't be interrupted/blocked [or at least
not as likely].
--
------------------------------------------------------------
Gary Thomas |
eCosCentric, Ltd. |
+1 (970) 229-1963 | eCos & RedBoot experts
gthomas@ecoscentric.com |
http://www.ecoscentric.com/ |
------------------------------------------------------------
--
Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
and search the list archive: http://sources.redhat.com/ml/ecos-discuss