This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.



Re: Stack switching


>>>>> "Paul" == Paul D DeRocco <pderocco@ix.netcom.com> writes:

    <snip>
    Paul> I have an application which needs to do some co-operative
    Paul> round-robin multi-tasking, and I'm not sure I can get eCos
    Paul> to do this with multiple threads. The application is a
    Paul> musical synthesizer in which each note is represented by a
    Paul> separate thread, which computes continuous control signals
    Paul> like envelopes and vibrato on behalf of that note.

    Paul> The requirements are:

    Paul> 1. Each thread needs to do one iteration of an apparently
    Paul> continuous calculation, then yield to the remaining threads.

    Paul> 2. The threads may not be time sliced. Each must do a
    Paul> complete iteration of its calculation, and then yield to the
    Paul> other threads. (They may be pre-empted by unrelated higher
    Paul> priority threads, however.)

    Paul> 3. When a new note thread is created, it must be scheduled
    Paul> next in line for execution among these note threads, not
    Paul> last in line, so that the onset of a new note occurs as soon
    Paul> as possible.

    Paul> The reason I want to use threads, instead of a list of
    Paul> functions to call, is that I want each calculation to be
    Paul> able to yield at different points in its program, including
    Paul> inside a nested function call, rather than having to restart
    Paul> at the beginning and rely on state variables to keep track
    Paul> of where its calculation is.

If the threads can be created in advance, or if you can create some
number of worker threads in advance and assign jobs to them
dynamically, then this sort of thing seems straightforward using
standard synchronization primitives. You will need something like:

#include <cyg/kernel/kapi.h>

#define NUM_THREADS 64
static int          running_thread = -1;  /* index of the active worker, -1 if none */
static cyg_mutex_t  lock;
static cyg_sem_t    wakeups[NUM_THREADS];
/* add application-specific data structures for keeping track of which
   jobs are pending */

void
yield(void)
{
    int self = running_thread;      /* index of the calling worker */
    cyg_mutex_lock(&lock);
    if ( /* jobs pending */ ) {
        running_thread = /* decide which thread should run next */ ;
        cyg_semaphore_post(&(wakeups[running_thread]));
    } else {
        running_thread = -1;
    }
    cyg_mutex_unlock(&lock);
    /* block until some other code hands control back to this worker */
    cyg_semaphore_wait(&(wakeups[self]));
}

/* called by worker threads or other code when a new job has to
   be scheduled */
void
add_job(/* application-specific job info */)
{
    cyg_mutex_lock(&lock);
    /* update details of pending jobs */
    if (-1 == running_thread) {
        running_thread = /* decide which thread should run next */ ;
        cyg_semaphore_post(&(wakeups[running_thread]));
    }
    cyg_mutex_unlock(&lock);
}

A worker thread will only run when its semaphore is posted, and a
semaphore is only posted either when a new job is added and currently
none of the workers are running, or when the current worker is
explicitly yielding. Hence at most one of the workers will be runnable
at any time and there is no need to worry about timeslicing. The
worker threads can yield either in an outer loop or deep inside some
calculation code. The remaining code needed, e.g. for initialization,
is left as an exercise for the reader.
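
To make that concrete, one possible shape for the missing pieces is
sketched below. It is only a sketch: do_one_iteration(), workers_init()
and the stack size and priority values are placeholders for whatever
your application actually needs.

#define WORKER_PRIORITY   10        /* placeholder priority */
#define WORKER_STACKSIZE  4096      /* placeholder stack size */

static cyg_thread   worker_thread[NUM_THREADS];
static cyg_handle_t worker_handle[NUM_THREADS];
static char         worker_stack[NUM_THREADS][WORKER_STACKSIZE];

extern void do_one_iteration(int note);  /* hypothetical per-note calculation */

/* Each worker blocks on its own semaphore, performs one iteration of
   its calculation when woken, then hands control back via yield(). */
static void
worker_entry(cyg_addrword_t data)
{
    int self = (int) data;
    cyg_semaphore_wait(&(wakeups[self]));
    for (;;) {
        do_one_iteration(self);
        yield();            /* blocks until this worker is chosen again */
    }
}

void
workers_init(void)
{
    int i;
    cyg_mutex_init(&lock);
    for (i = 0; i < NUM_THREADS; i++) {
        cyg_semaphore_init(&(wakeups[i]), 0);
        cyg_thread_create(WORKER_PRIORITY, worker_entry, (cyg_addrword_t) i,
                          "worker", worker_stack[i], WORKER_STACKSIZE,
                          &(worker_handle[i]), &(worker_thread[i]));
        cyg_thread_resume(worker_handle[i]);
    }
}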

There are other ways of achieving the same effect using the standard
synchronization primitives. For example, code like the above would
normally be implemented with a mutex and condition variable combination,
but with 64 or so threads you will probably want to avoid having every
thread wake up just to check the condition.
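
For comparison, a condition variable version of yield() might look
roughly like the following (an untested sketch; yield_cv and the
'wakeup' condition variable are names invented here). It also shows
where the cost comes from: every broadcast wakes all of the workers so
that each can re-check the condition.

static cyg_cond_t wakeup;   /* initialised with cyg_cond_init(&wakeup, &lock) */

void
yield_cv(void)
{
    int self = running_thread;
    cyg_mutex_lock(&lock);
    if ( /* jobs pending */ ) {
        running_thread = /* decide which thread should run next */ ;
    } else {
        running_thread = -1;
    }
    cyg_cond_broadcast(&wakeup);
    /* all workers wake up here; only the chosen one gets to proceed */
    while (running_thread != self) {
        cyg_cond_wait(&wakeup);
    }
    cyg_mutex_unlock(&lock);
}

The corresponding add_job() would broadcast in the same way. With 64
workers every broadcast produces 64 wakeups for a single useful one,
which is exactly the overhead the per-thread semaphores avoid.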

The extra overhead of an eCos thread switch compared with a minimal
stack switch should be negligible, as should the overheads of the
mutex and semaphore operations, unless the amount of calculation done
between yields is very small. Trying to eliminate such overheads early
on in the development process, before you can collect any profiling
data, would be premature optimization.

Bart

-- 
Bart Veer                                 eCos Configuration Architect
http://www.ecoscentric.com/               The eCos and RedBoot experts
http://www.ecoscentric.com/legal        Legal info, address and number


