This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.



Re: typo in i2c example in reference manual?


On 2007-02-19, Bart Veer <bartv@ecoscentric.com> wrote:

>    Grant> 1) The "delay" that's specified appears to be just added on to
>    Grant>    the intrinsic overhead of a bit-banged driver.  Specifying a
>    Grant>    delay of 10,000ns on my platform results in an actual clock
>    Grant>    period of about 59,000ns.  The description of the delay
>    Grant>    parameter in the reference manual appears to assume that
>    Grant>    there is zero overhead involved in the driver.  Is this the
>    Grant>    expected behavior?
>
> It is assumed that the bitbang function just needs to manipulate a
> couple of registers related to GPIO pins, which should be near enough
> instantaneous.

Changing a pin state requires a single instruction on my
platform.  Still, setting the delay parameter to 0 results in an
SCK period of 50,000ns.  Setting the delay parameter to a
non-zero value adds to that 50,000ns.  [I'm running on a NIOS2
CPU at 44MHz.]

> If for some reason the operation is more expensive, there
> would be no easy way to measure that and allow for it.

Right, but the reference manual implies that the overhead is
accounted for when it states that the delay value will be the
SCK period.  That could only be true if the overhead is either
zero or is measured and compensated for.

> Hence the specified delay is just used to generate the
> HAL_DELAY_US() parameter. Developers still have some control
> since they fill in the delay field when instantiating an I2C
> device.

Yup.  I've set it to 0, and I get an SCK of 20KHz.  I suppose I
could trace execution through the i2c routines and try to
figure out where the time is going.
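
For what it's worth, the numbers above fit a simple model in
which the configured delay just adds to the driver's intrinsic
overhead (rough arithmetic based only on the figures in this
thread, not on the driver source):

    SCK period ~= intrinsic overhead + configured delay

    delay = 0ns       ->  ~50,000ns period  (20kHz)
    delay = 10,000ns  ->  ~59,000ns period  (~17kHz, rather than
                          the 100kHz the manual's wording suggests)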

>    Grant> 2) There doesn't seem to be any way to determine when writing
>    Grant>    zero bytes of data with cyg_i2c_tx() whether the operation
>    Grant>    was successful or not, since it returns 0 for both cases.  I
>    Grant>    presume one should use the lower-level "transaction"
>    Grant>    routines for this case?
>
> Under what circumstances does it make sense to write zero bytes of
> data?

The datasheet for the EEPROM I'm using states that in order to
determine whether a write cycle has completed, one should send
an address/control byte with the r/*w bit cleared.  If that
byte is acked, then the write cycle is finished.  If it isn't,
then the write cycle is still in progress.  I've determined
that sending an extra byte after the control byte doesn't seem
to hurt anything, but I'd prefer to do things according to the
datasheet.

>    Grant> How do I send a single byte on the i2c bus??
>
> I suspect you are setting the start flag. That means the I2C
> code has to send the device address and the direction bit
> before the byte of data.

Yup.  That's what I finally deduced.

> I2C does not have the concept of sending a raw byte of data
> onto the bus. Data must always be addressed to a device on the
> bus, which means sending address bytes. The address byte also
> includes one bit for the direction, so that the addressed
> device knows whether it should accept or transmit data.

If I'm going to poll the device to see whether it's done with a
program cycle the way the datasheet describes, I need to send
start + address/write and check for the ACK.  AFAICT, I can
only do that by specifying 0 data bytes, but then I can't tell
whether the address byte was ACKed or not, since both cases
return 0.

My testing seems to indicate that sending a single byte after
the address byte doesn't hurt anything (all it does is set an
internal register value that will be changed later anyway).
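
In other words, something along these lines (just an untested
sketch -- "eeprom_dev" stands for whatever cyg_i2c_device
instance is actually in use, and it assumes cyg_i2c_tx()
returns the number of data bytes that made it onto the bus):

    #include <cyg/io/i2c.h>

    /* Instantiated elsewhere, e.g. with CYG_I2C_BITBANG_DEVICE(). */
    extern cyg_i2c_device eeprom_dev;

    /* Returns true once the EEPROM has finished its internal
       write cycle.  The dummy data byte only hits a register
       that gets rewritten before the next real access anyway,
       so on this part sending it is harmless.                   */
    static cyg_bool
    eeprom_write_done(void)
    {
        cyg_uint8 dummy = 0;

        /* If the EEPROM is still busy it NAKs its address and
           nothing is transferred (return 0); once it's idle the
           dummy byte goes out (return 1).                       */
        return cyg_i2c_tx(&eeprom_dev, &dummy, 1) == 1;
    }

Looping on that with a short delay between polls would replace
waiting out the worst-case write time.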

>    Grant> It will be called from both driver init() routines and
>    Grant> from threads.  How do I tell the difference so that the
>    Grant> function can call HAL_DELAY_US() in the former case and
>    Grant> cyg_thread_delay() in the latter?
>
> cyg_thread_delay() generally operates in terms of many
> milliseconds.

I know.

> Typically low-level device drivers do not deal with things on
> such long timescales, instead that is left to higher-level
> code or the application.

Except there are operations in driver init() methods that may
need delays of several milliseconds in order to detect whether
or not peripherals are installed and/or working properly.

> Instead typical device drivers need delays of the order of
> microseconds, which always requires HAL_DELAY_US() rather than
> cyg_thread_delay().

Right.  But I have a routine that requires millisecond delays
and that is called from driver init() functions, from RedBoot,
and from normal threads that may or may not have the scheduler
locked.

Using HAL_DELAY_US all the time would be bad for performance
during normal thread calls.  Using cyg_thread_delay() won't
work for the init() and locked-scheduler cases.

I know how to check the scheduler lock.  I know how to
determine if the function is being compiled for RedBoot.  What
I hadn't figured out is how to tell whether the scheduler has
been started or not.

> If there is a valid reason for having milliseconds of delay
> inside driver code,

There is.  I need to time-out in initialization code if
peripherals don't respond (they may not actually be there).

> the best bet is to check whether or not interrupts are
> enabled. Typically that does not happen until the scheduler is
> started and threads begin to run.

Thanks.  That should be pretty simple.
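
Something along these lines, presumably (untested sketch -- the
HAL_QUERY_INTERRUPTS() encoding and the 10ms-per-tick conversion
are assumptions that need checking against the particular HAL
and kernel configuration):

    #include <pkgconf/system.h>
    #include <cyg/infra/cyg_type.h>
    #include <cyg/hal/hal_arch.h>      /* HAL_DELAY_US()         */
    #include <cyg/hal/hal_intr.h>      /* HAL_QUERY_INTERRUPTS() */
    #ifdef CYGPKG_KERNEL
    # include <cyg/kernel/kapi.h>      /* cyg_thread_delay()     */
    #endif

    /* Delay for roughly 'ms' milliseconds.  Sleep if the scheduler
       is demonstrably up and usable, otherwise busy-wait.          */
    static void
    my_delay_ms(cyg_uint32 ms)
    {
    #ifdef CYGPKG_KERNEL
        CYG_INTERRUPT_STATE ints_enabled;

        /* Assumption: on this HAL the state stored by
           HAL_QUERY_INTERRUPTS() is nonzero when interrupts are
           enabled.  The encoding is architecture-specific.        */
        HAL_QUERY_INTERRUPTS(ints_enabled);
        if (ints_enabled && cyg_scheduler_read_lock() == 0) {
            /* Assumes the default 100Hz system clock (10ms
               ticks); adjust for the actual configuration.        */
            cyg_thread_delay((ms + 9) / 10);
            return;
        }
    #endif
        /* init() time, RedBoot, or scheduler locked/not started:
           fall back to a busy-wait.                               */
        while (ms--)
            HAL_DELAY_US(1000);
    }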

-- 
Grant Edwards                   grante             Yow!  Are you selling NYLON
                                  at               OIL WELLS?? If so, we can
                               visi.com            use TWO DOZEN!!


-- 
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss

