This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



RE: Thoughts on a semaphore filesystem device that would behave like socketpair() and/or pipe().


Hi Nick(all)-

Thanks for the feedback.  Again, I'm not sure I (or anyone from our group) will get a chance to work on implementing a FIFO filesystem (as described below), but beforehand I'll spare you any trivial questions about device driver implementations... I need to read all the eCos docs first.  After that, I hope to have a common vocabulary and background so I (or someone within my group) can ask you "good" questions.

Phil



> -----Original Message-----
> From: nickg@miso.calivar.com 
> [mailto:nickg@miso.calivar.com]On Behalf Of
> Nick Garnett
> Sent: Wednesday, January 28, 2004 8:50 AM
> To: Fleege, Philip T
> Cc: ecos-discuss@sources.redhat.com
> Subject: Re: [ECOS] Thoughts on a semaphore filesystem device that
> would behave like socketpair() and/or pipe().
> 
> 
> Philip.Fleege@gd-ais.com writes:
> 
> > Hi Nick(all)-
> > 
> > Thanks for the quick responses....
> > 
> > Ok, I like the idea of creating a special "semaphore" filesystem.
> > My background is more in applications development, and thus the
> > POSIX layer... not much driver experience, but I'm always wanting
> > to learn.  So we may want to pursue this.  For now, I have the
> > workaround completed: using two full-blown INET sockets via
> > 'localhost'... yuck!
> > 
> > Although your assertion that the primary requirement is to unblock
> > select() is true, we additionally need to use the "tickle" file
> > descriptor (the one the target thread is select()ing on) to pass
> > control data, i.e. we send about 32 bytes of "command/context" data
> > so the target thread knows 'why' it is being unblocked (not just
> > that it needs to unblock).  Therefore, the ideal solution would look
> > something like a pipe, where this command data could reside with the
> > device until it is read by the one target thread (the one blocking
> > on select()).
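The tickle-fd pattern described above can be sketched with a POSIX pipe() on a host system; the proposed eCos FIFO device would be used the same way. CMD_SIZE and the function names below are illustrative, not part of any existing eCos API:

```c
/* Sketch of the tickle-fd pattern, demonstrated with a POSIX pipe();
 * a FIFO device instance would be used the same way. */
#include <assert.h>
#include <string.h>
#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

#define CMD_SIZE 32            /* ~32 bytes of command/context data */

/* Writer side: queue a command, which also wakes the select()er. */
static ssize_t send_command(int wfd, const char cmd[CMD_SIZE])
{
    return write(wfd, cmd, CMD_SIZE);
}

/* Reader side: block in select() until the tickle fd is readable,
 * then read the command so we know *why* we were woken. */
static ssize_t wait_for_command(int rfd, char cmd[CMD_SIZE])
{
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(rfd, &rset);
    if (select(rfd + 1, &rset, NULL, NULL, NULL) < 0)
        return -1;
    return read(rfd, cmd, CMD_SIZE);
}
```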
> 
> 
> You could always pass the reason for the wakeup via shared global
> memory. 
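Nick's shared-global-memory suggestion could look something like the sketch below, shown here with POSIX primitives on a host: the tickle only wakes the thread, and the reason lives in a mutex-guarded global. All names are illustrative:

```c
/* Sketch of passing the wakeup reason via shared global memory: the
 * tickle fd carries no data, only the wakeup itself. */
#include <assert.h>
#include <pthread.h>
#include <string.h>

static pthread_mutex_t reason_lock = PTHREAD_MUTEX_INITIALIZER;
static char wakeup_reason[32];

/* Client side: record why we are about to tickle the target thread. */
void post_reason(const char *why)
{
    pthread_mutex_lock(&reason_lock);
    strncpy(wakeup_reason, why, sizeof(wakeup_reason) - 1);
    wakeup_reason[sizeof(wakeup_reason) - 1] = '\0';
    pthread_mutex_unlock(&reason_lock);
    /* ...then write a byte to the tickle fd to unblock select()... */
}

/* Target side: after select() returns, find out why we were woken. */
void fetch_reason(char *buf, size_t len)
{
    pthread_mutex_lock(&reason_lock);
    strncpy(buf, wakeup_reason, len - 1);
    buf[len - 1] = '\0';
    pthread_mutex_unlock(&reason_lock);
}
```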
> 
> > 
> > You had indicated that having data buffered within this device is
> > not optimal; and I understand why.  However, if we could have this
> > and have it configurable, this could be a very powerful device
> > (relative to the specific use case described above).  In this way,
> > client threads (threads writing to the 'tickle' device) could write
> > their command data without blocking (until the device buffer is
> > full; or returns with errno set to EWOULDBLOCK if the write side is
> > configured as non-blocking and the buffer is full).
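In the non-blocking case the device would behave like any other O_NONBLOCK descriptor. A hedged sketch of the writer's error handling, demonstrated on a pipe (try_send() is a hypothetical helper, not an existing API):

```c
/* Sketch of a client thread writing command data without blocking,
 * as described above.  O_NONBLOCK/EWOULDBLOCK are standard POSIX. */
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 on success, 0 if the device buffer is full, -1 otherwise. */
int try_send(int fd, const void *cmd, size_t len)
{
    ssize_t n = write(fd, cmd, len);
    if (n == (ssize_t)len)
        return 1;
    if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN))
        return 0;               /* buffer full: caller can retry/drop */
    return -1;                  /* short write or hard error */
}
```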
> >
> 
> You are talking here about a full FIFO filesystem. I was hoping to
> avoid the need to implement that to achieve your goals. If such a
> thing were to be implemented then it must be done properly.
> 
> 
> > The buffer size configuration could be defaulted to zero; for those
> > who do not need to send data and/or who do not want to pay the price
> > for the allocation/space.  This configuration could be a 'static'
> > ecos.ecc/package config option.  Going for the "dream solution", a
> > special ioctl() command could be defined that would allow dynamic
> > runtime modification of this buffer size.  This would be useful for
> > those occasions where several instances of this device are
> > created/realized but each use has a different size requirement.
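A static config option of the kind described might look roughly like this in CDL; the option name, default, and wording are invented purely for illustration:

```
cdl_option CYGNUM_IO_FIFO_BUFFER_SIZE {
    display       "FIFO device buffer size"
    flavor        data
    default_value 0
    description   "
        Number of bytes of command data buffered per FIFO device
        instance.  A value of zero disables buffering, so the device
        only provides the bare unblock-select() semantics."
}
```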
> >
> 
> I think we could only have one instance of the device, and a
> configuration-time buffer size would be best to allow the memory usage
> of the system to be deterministic.
> 
> > By the way, I would like to be able to allocate a non-predetermined
> > number of these devices.  In other words, like pipe() or
> > socketpair(), I need to be able to open multiple instances of this
> > type of device... up to the max allowed file descriptors.  Each
> > open()'ed instance has its own semaphore count and optional data
> > buffer.
> 
> The main problem with that approach is that it would require use of
> the memory allocator package. For most of the basic systems in eCos we
> prefer to avoid mandating the use of malloc() since not all platforms
> are suitable for it, and not all applications want to pay the price or
> have the available memory. However, a config option that allows it to
> switch from a static memory implementation to a malloc based one
> would be acceptable. The RAM filesystem has this, for example.
> 
> > 
> > Again, I haven't begun to look into the implementation details of an
> > eCos driver.  I've been working on getting our Linux/VxWorks code
> > ported... I'm not sure if what I have outlined above is reasonable
> > or even doable... but I have tried to better clarify our needs.
> 
> It is certainly reasonable and doable, but you are going to have to do
> it yourself. I'm happy to answer questions about the FILEIO subsystem
> and other issues, and make design suggestions.
> 
> > 
> > Does this sound reasonable to you?  If so, would this be something
> > that the eCos community would support development on?  In other
> > words, I'm not sure how things get done... I'm not sure if you
> > would want me doing it... or when I would (or could ever) get it
> > done.  If I were to attempt this, would it be something that you
> > (or the eCos community) would help support me with?
> 
> Andrew has given a good outline of what you should do. Getting the
> overall design right is important, that way it can be included into
> the main source tree with minimal fuss.
> 
> -- 
> Nick Garnett                    eCos Kernel Architect
> http://www.ecoscentric.com      The eCos and RedBoot experts
> 

-- 
Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
and search the list archive: http://sources.redhat.com/ml/ecos-discuss

