This is the mail archive of the
ecos-discuss@sources.redhat.com
mailing list for the eCos project.
Re: About Cyg_Scheduler::unlock_inner
Rafael Rodríguez Velilla <rrv@tid.es> writes:
> > Yup. To clarify, the new thread context is not just some random location
> > in the thread's execution. It can only be either that very same location
> > in Cyg_Scheduler::unlock() or a similar piece of code used at the very
> > start of a thread's life.
> >
> > You would expect that a thread which is interrupted and descheduled by a
> > real hardware IRQ, such as a timer tick, has a saved context that points
> > to what the thread was doing at the time. But that is not so: the saved
> > context is in the middle of the interrupt-handling code, making the same
> > unlock() call as a "normal" yield of the CPU.
> >
> > So when an interrupted thread restarts, it restarts in the middle of the
> > normal interrupt sequence: it unlocks the scheduler, restores the
> > interrupted state, and returns from the interrupt, thus continuing from
> > where it was interrupted.
>
> Then, the scheduling of new threads only happens when an interrupt occurs
> (in the context of interrupt_end)?
> Doesn't cyg_thread_delay (for example) produce a rescheduling?
Yes, of course, sorry. I was clarifying that the same code is used to
change over who owns the CPU in both cases: an interrupted thread and a
"voluntary" reschedule such as cyg_thread_delay(), waiting on a
semaphore, or signalling a semaphore that wakes up a higher-priority
thread.
In the "voluntary" reschedule, Cyg_Scheduler::unlock() is called
directly by application code; in the interrupted case it's called as
part of the interrupt-handling mechanism.
- Huge