This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.
Re: typo in i2c example in reference manual?
On 2007-02-22, Bart Veer <bartv@ecoscentric.com> wrote:
> >> It is assumed that the bitbang function just needs to manipulate a
> >> couple of registers related to GPIO pins, which should be near enough
> >> instantaneous.
>
> Grant> Changing a pin state requires a single instruction on my
> Grant> platform. Still, setting the delay parameter to 0 results
> Grant> in a SCK period of 50,000ns. Setting the delay parameter to
> Grant> a non-zero value adds to that 50,000ns. [I'm running on a
> Grant> NIOS2 CPU at 44MHz.]
>
> So apparently it takes 25us to change a pin state. Sounds like there
> is a big problem somewhere.
It's not a hardware problem -- that 25us is all in the i2c
infrastructure.
The following loop generates an SCK of around 20MHz. (I'm
not sure of the exact frequency; my digital scope has a max
sample rate of 40MHz.)
while (1)
{
    BitSet(Sck);
    BitClr(Sck);
    BitSet(Sck);
    BitClr(Sck);
    BitSet(Sck);
    BitClr(Sck);
    BitSet(Sck);
    BitClr(Sck);
    BitSet(Sck);
    BitClr(Sck);
    BitSet(Sck);
    BitClr(Sck);
    BitSet(Sck);
    BitClr(Sck);
}
Adding two nops slows it down to the point where I can
actually measure it:
while (1)
{
    BitSet(Sck);
    asm(" nop");
    asm(" nop");
    BitClr(Sck);
    asm(" nop");
    asm(" nop");
    BitSet(Sck);
    [...]
That produces an SCK of 5.6MHz.
Adding in the overhead of calling the "bitbang" function:
while (1)
{
    dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_HIGH);
    dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_LOW);
    dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_HIGH);
    dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_LOW);
    [...]
}
That slows SCK down to about 400KHz.
Add in the layer above that by calling the tx/rx routines, and
the fastest clock rate I can get is about 20KHz.
> Grant> Right, but the reference manual implies that it does when
> Grant> it states that the delay value will be the SCK period. That
> Grant> could only be true if the overhead is either zero or is
> Grant> measured and compensated for.
>
> On processors which have dedicated I2C bus master support (as opposed
> to bitbanging GPIO lines) the delay is likely to be exact since it
> will be used to set a clock register within the I2C hardware. For a
> bit-banged bus the delay should be accurate to within a few percent,
I don't see how that can be true unless you specify long delays
on a very fast processor. For typical i2c clock rates and a
44MHz processor, the overhead isn't negligible -- it's 10X
larger than the requested delay.
> which should be good enough for all practical purposes. It
> will not be any more accurate than that because HAL_DELAY_US()
> is not expected to be any more accurate than that. There is a
> reasonable assumption here that the low-level bitbang
> operations are sufficiently cheap as to be negligible.
A reasonable assumption? We must be assuming a 1GHz processor
with a cache big enough to hold the entire application.
--
Grant Edwards grante Yow! The PINK SOCKS were
at ORIGINALLY from 1952!! But
visi.com they went to MARS around
1953!!
--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss