This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: atomic singly linked lists


Sorry for the long delay in responding....



Nick Garnett writes:

> > I was looking at the new cyg_spinlock stuff in the kernel, and noticed
> > that the uniprocessor version of the functions don't disable scheduling
> > or block DSRs when a lock is acquired.  In order for spinlocks to be
> > useful, they need to be able to lock between DSRs and "thread" level
> > code (think of a producer/consumer model for example).  On a
> > uniprocessor machine, that means the scheduler and DSR dispatching need
> > to be disabled while any lock is acquired.  On an MP machine,
> > scheduling and DSR dispatching would be disabled on the CPU that
> > acquires the spinlock, and other CPUs would use the spin mechanics to
> > synchronize against the CPU that owns the lock.

> To keep things simple I took the approach that the basic spinlock
> would not make any statement about safety with respect to interrupts
> and DSRs. It only functions to provide synchronization between
> CPUs.

> If the user needs interrupt-safety he can use the *_intsave()
> variants, which block all local preemption. I guess there could also
> be *_schedlock() variants that would behave as you suggest. But given
> the relative cost of the scheduler lock mentioned above, and the
> intention that the spinlocks should only be held for very brief
> lengths of time, this did not seem necessary. Also, the scheduler lock
> is global to all CPUs, since it protects shared data, and so is less
> useful for very low level synchronization purposes.

I definitely vote for adding a *_schedlock() variant to spinlocks.  Like I
said above, if you are synchronizing between thread-level code and DSR code,
you have to have that kind of functionality.  I think it would be a better
design if these kinds of functions were provided by the OS instead of user
code doing a scheduler lock followed by a spinlock acquisition, because I'd
be afraid of running into race conditions.  I think the *_schedlock()
functions would also be more future-proof if details of SMP, scheduling,
DSRs, etc. change (such as if the scheduler lock were changed to be per CPU
instead of per system).

> > On a related topic, I think it would be extremely nice while making SMP
> > enhancements to extend the DSR mechanism into a general-purpose one (a
> > la NT's equivalent, DPCs).  This would allow you to define DSRs for any
> > old purpose and be able to queue them whenever you wanted.  In an SMP
> > environment, this can allow better parallelism.

> I have thought about this, and decided that I prefer to keep the
> system as simple as possible. Once DSRs become a general purpose
> mechanism you get into questions of prioritizing them and priority
> inversion rears its ugly head. This is a can of worms I prefer not to
> open. Anything more complex than what DSRs currently do is probably
> best done in threads, where it can be scheduled properly.

I agree that adding a priority mechanism to DSRs would not be a good thing
to do, and I don't advocate that.  I think, though, that you can make them
a general-purpose mechanism and have them be very useful without adding
priority stuff.  I'll bring up NT again: DPCs in NT have no documented
priority mechanism, and even with that restriction, they are useful (in
fairness, there is an undocumented way to queue a DPC at the head of the
DPC queue as opposed to the tail, but it's almost never used).  I
personally have been able to get much better performance out of an SMP
system by making use of DPCs (on a uniprocessor system there's no benefit,
but on an SMP system it can pay off).  Trying to do the same thing with
threads, and the context-switch overhead they imply, wouldn't have worked.
Hopefully you might reconsider your decision?

Andre Asselin
IBM ServeRAID Software Development
Research Triangle Park, NC

