Re: REAL meaning of little-endian and big-endian?
- From: Chris Gray <chris dot gray at acunia dot com>
- To: 张 亮 <johnsonest at hotmail dot com>
- Cc: ecos-discuss at sources dot redhat dot com
- Date: Tue, 17 Dec 2002 09:58:55 +0100 (CET)
- Subject: Re: [ECOS] REAL meaning of little-endian and big-endian?
On Tue, 17 Dec 2002, 张 亮 wrote:
>
> Hello, everyone!
> I have an "old" and "simple" question (at least that is how I thought of it):
> 1. Do the two endian modes differ only in byte order? What about the bit order within one byte?
If we are talking about data stored in memory, this question is pretty
well meaningless: for most CPUs the smallest amount of data that can be
moved between CPU and memory is a byte, and all 8 bits move at the same
time. So whether you call the most or the least significant bit 'bit 0' is
purely a matter of convention: nowadays everybody calls the least
significant bit 'bit 0', but at one time the US did this and the UK did
the opposite! So an American would say `bit 0' and a Brit would say `bit
7', and they would be talking about the same bit ...
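To make that concrete, here is a tiny example of my own (not anything from
your code): C never addresses an individual bit in memory, it just shifts
and masks a byte value, and that works the same way on any CPU whatever you
choose to call `bit 0':

    #include <stdio.h>

    int main(void)
    {
        unsigned char b = 0xA5;          /* binary 1010 0101 */
        int lsb = b & 1;                 /* "bit 0" in today's convention    */
        int msb = (b >> 7) & 1;          /* "bit 0" in the old UK convention */
        printf("lsb = %d, msb = %d\n", lsb, msb);
        return 0;
    }

The shifts operate on the value of the byte, not on its representation in
memory, so endianness never enters into it.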
If we are talking about how data is transmitted `on the wire', then the
question may or may not be meaningful (think of a parallel cable!), and if
it is then the answer is only really useful if you are monitoring the wire
with an oscilloscope: your software can only see the data as an array of
bytes in memory, individual bits being handled by the hardware UART or MAC
or whatever. I think that most serial communication actually sends the
least significant bit first (and that ISO 3309 calls this 'bit 1'!), but I
may be misremembering.
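If you really were bit-banging a serial line in software, a rough sketch of
sending a byte least significant bit first would look like this (mine, with a
made-up send_bit() standing in for whatever code actually drives the pin):

    /* send_bit() is hypothetical: imagine it toggles the output pin */
    extern void send_bit(int bit);

    void send_byte_lsb_first(unsigned char b)
    {
        int i;
        for (i = 0; i < 8; i++) {
            send_bit(b & 1);     /* lowest remaining bit goes on the wire */
            b >>= 1;             /* shift the next bit down */
        }
    }

Normally the UART does exactly this for you in hardware, which is why your
software never sees the bit order at all.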
> 2. If they differ in bit order and a byte from a little-endian host comes in from the
> net, how does my receiving host (big-endian) read it correctly? Looking at the
> IP processing code, it simply reads it! Why?
See above. Unless you are actually `bit-banging' in software, you never
see the individual bits within a byte. In IP, fields of more than one byte
(e.g. the network address) are _always_ transmitted with the most
significant byte first, regardless of the `endianness' of the CPU
involved. So `host byte ordering' may be big- or little-endian, but
`network byte ordering' is always big-endian.
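As a small illustration (again mine, not taken from the stack sources): the
standard htonl()/ntohl() routines do exactly this conversion between host and
network byte order, and the bytes you see in memory after htonl() are the
bytes that go on the wire, most significant first, whatever the CPU is:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>    /* htonl()/ntohl() */

    int main(void)
    {
        uint32_t addr = 0xC0A80001;              /* 192.168.0.1 as one 32-bit value */
        uint32_t net  = htonl(addr);             /* network (big-endian) byte order */
        unsigned char *p = (unsigned char *)&net;

        /* prints 192.168.0.1 on big- and little-endian hosts alike */
        printf("on the wire: %u.%u.%u.%u\n", p[0], p[1], p[2], p[3]);
        printf("back in host order: 0x%08x\n", (unsigned int)ntohl(net));
        return 0;
    }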
> 3. If they are identical in bit order, why is this defined as:
>
> struct ip_hdr
> {
>
> #if BYTE_ORDER == LITTLE_ENDIAN
>     unsigned char ip_version:4,
>                   ip_hlen:4;
> #elif BYTE_ORDER == BIG_ENDIAN
>     unsigned char ip_hlen:4,
>                   ip_version:4;
> #endif
Very good question! The C standard doesn't specify the order of bitfields
within a byte or word, so the interpretation is up to the writer of the
compiler. The code you quote has probably been tested with gcc on both
big- and little-endian machines, so I suppose that gcc uses a convention
that bitfields are arranged according to the endianness of the CPU. Other
C compilers may behave differently. (I got badly burned by this some 7 or
8 years ago, and I've never used bitfields since. 8-0 So I neither know
nor care what convention gcc is using ...)
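If you do want to know what your particular compiler does, the only reliable
way I know of is to try it, along these lines (a sketch of mine, nothing
authoritative):

    #include <stdio.h>

    union probe {
        struct {
            unsigned char first_field  : 4;   /* declared first  */
            unsigned char second_field : 4;   /* declared second */
        } f;
        unsigned char raw;
    };

    int main(void)
    {
        union probe p;
        p.f.first_field  = 0x1;
        p.f.second_field = 0x2;
        /* gcc on a little-endian target typically prints 0x21 (first-declared
           field in the low bits); on a big-endian target typically 0x12;
           other compilers are free to do what they like */
        printf("raw byte = 0x%02x\n", p.raw);
        return 0;
    }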
Good luck
--
Chris Gray
VM Architect, ACUNIA